
  • Review Article
  • Open access
  • Published: 08 June 2021

Metacognition: ideas and insights from neuro- and educational sciences

  • Damien S. Fleur   ORCID: orcid.org/0000-0003-4836-5255 1 , 2 ,
  • Bert Bredeweg   ORCID: orcid.org/0000-0002-5281-2786 1 , 3 &
  • Wouter van den Bos 2 , 4  

npj Science of Learning, volume 6, Article number: 13 (2021)


  • Human behaviour
  • Interdisciplinary studies

Metacognition comprises both the ability to be aware of one’s cognitive processes (metacognitive knowledge) and to regulate them (metacognitive control). Research in educational sciences has amassed a large body of evidence on the importance of metacognition in learning and academic achievement. More recently, metacognition has been studied from experimental and cognitive neuroscience perspectives. This research has started to identify brain regions that encode metacognitive processes. However, the educational and neuroscience disciplines have largely developed separately with little exchange and communication. In this article, we review the literature on metacognition in educational and cognitive neuroscience and identify entry points for synthesis. We argue that to improve our understanding of metacognition, future research needs to (i) investigate the degree to which different protocols relate to the similar or different metacognitive constructs and processes, (ii) implement experiments to identify neural substrates necessary for metacognition based on protocols used in educational sciences, (iii) study the effects of training metacognitive knowledge in the brain, and (iv) perform developmental research in the metacognitive brain and compare it with the existing developmental literature from educational sciences regarding the domain-generality of metacognition.


Introduction

Metacognition is defined as “thinking about thinking” or the ability to monitor and control one’s cognitive processes 1, and plays an important role in learning and education 2, 3, 4. For instance, high performers tend to present better metacognitive abilities (especially control) than low performers in diverse educational activities 5, 6, 7, 8, 9. Recently, considerable progress has been made in studying the neural mechanisms of metacognition 10, 11, yet it is unclear at this point how these results may inform educational sciences or interventions. Given the potential benefits of metacognition, it is important to get a better understanding of how metacognition works and of how training can be useful.

The interest in bridging cognitive neuroscience and educational practices has increased in the past two decades, spanning a large number of studies grouped under the umbrella term of educational neuroscience 12, 13, 14. With it, researchers have brought forward issues that are viewed as critical for the discipline to improve education. Recurring issues that may impede the relevance of neural insights for educational practices concern external validity 15, 16, theoretical discrepancies 17 and differences in the domains of (meta)cognition operationalised (specific or general) 15. This is important because, in recent years, brain research has started to orient itself towards training metacognitive abilities that would translate into real-life benefits. However, direct links between metacognition in the brain and metacognition in domains such as education have still to be made. As for educational sciences, a large body of literature on metacognitive training is available, yet we still need clear insights into what works and why. While some studies suggest that training metacognitive abilities results in higher academic achievement 18, other interventions show mixed results 19, 20. Moreover, little is known about the long-term or transfer effects of these interventions. A better understanding of the cognitive processes involved in metacognition, and of how they are expressed in the brain, may provide insights in these regards.

Within cognitive neuroscience, there has been a long tradition of studying executive functions (EF), which are closely related to metacognitive processes 21. Similar to metacognition, EF show a positive relationship with learning at school. For instance, performance in laboratory tasks involving error monitoring, inhibition and working memory (i.e. processes that monitor and regulate cognition) is associated with academic achievement in pre-school children 22. More recently, researchers have studied metacognition in terms of introspective judgements about performance in a task 10. Although the neural correlates of such behaviour are being revealed 10, 11, little is known about how behaviour during such tasks relates to academic achievement.

Educational scientists and cognitive neuroscientists study metacognition in different contexts using different methods. Indeed, while the latter investigate metacognition via behavioural tasks, the former mainly rely on introspective questionnaires. The extent to which these different operationalisations of metacognition match and reflect the same processes is unclear. As a result, the external validity of the methodologies used in cognitive neuroscience is also unclear 16. We argue that neurocognitive research on metacognition has great potential to provide insights into mechanisms relevant in educational contexts, and that theoretical and methodological exchange between the two disciplines can benefit neuroscientific research in terms of ecological validity.

For these reasons, we investigate the literature through the lenses of external validity, theoretical discrepancies, domain generality and metacognitive training. Research on metacognition in cognitive neuroscience and educational sciences is reviewed separately. First, we investigate how metacognition is operationalised with respect to the common framework introduced by Nelson and Narens 23 (see Fig. 1). We then discuss the existing body of evidence regarding metacognitive training. Finally, we compare findings in both fields, highlight gaps and shortcomings, and propose avenues for research relying on crossovers of the two disciplines.

figure 1

Meta-knowledge is characterised as the upward flow from object-level to meta-level. Meta-control is characterised as the downward flow from meta-level to object-level. Metacognition is therefore conceptualised as the bottom-up monitoring and top-down control of object-level processes. Adapted from Nelson and Narens’ cognitive psychology model of metacognition 23 .

In cognitive neuroscience, metacognition is divided into two main components 5 , 24 , which originate from the seminal works of Flavell on metamemory 25 , 26 . First, metacognitive knowledge (henceforth, meta-knowledge) is defined as the knowledge individuals have of their own cognitive processes and their ability to monitor and reflect on them. Second, metacognitive control (henceforth, meta-control) consists of someone’s self-regulatory mechanisms, such as planning and adapting behaviour based on outcomes 5 , 27 . Following Nelson and Narens’ definition 23 , meta-knowledge is characterised as the flow and processing of information from the object level to the meta-level, and meta-control as the flow from the meta-level to the object level 28 , 29 , 30 (Fig. 1 ). The object-level encompasses cognitive functions such as recognition and discrimination of objects, decision-making, semantic encoding, and spatial representation. On the meta-level, information originating from the object level is processed and top-down regulation on object-level functions is imposed 28 , 29 , 30 .

Educational researchers have mainly investigated metacognition through the lens of Self-Regulated Learning theory (SRL) 3 , 4 , which shares common conceptual roots with the theoretical framework used in cognitive neuroscience but varies from it in several ways 31 . First, SRL is constrained to learning activities, usually within educational settings. Second, metacognition is merely one of three components, with “motivation to learn” and “behavioural processes”, that enable individuals to learn in a self-directed manner 3 . In SRL, metacognition is defined as setting goals, planning, organising, self-monitoring and self-evaluating “at various points during the acquisition” 3 . The distinction between meta-knowledge and meta-control is not formally laid down although reference is often made to a “self-oriented feedback loop” describing the relationship between reflecting and regulating processes that resembles Nelson and Narens’ model (Fig. 1 ) 3 , 23 . In order to facilitate the comparison of operational definitions, we will refer to meta-knowledge in educational sciences when protocols operationalise self-awareness and knowledge of strategies, and to meta-control when they operationalise the selection and use of learning strategies and planning. For an in-depth discussion on metacognition and SRL, we refer to Dinsmore et al. 31 .

Metacognition in cognitive neuroscience

Operational definitions.

In cognitive neuroscience, research on metacognition is split into two tracks 32. One track mainly studies meta-knowledge by investigating the neural basis of introspective judgements about one’s own cognition (i.e., metacognitive judgements), and meta-control with experiments involving cognitive offloading. In these experiments, subjects can perform actions such as setting reminders, making notes and delegating tasks 33, 34, or report their desire to do so 35. Some research has investigated how metacognitive judgements can influence subsequent cognitive behaviour (i.e., a downward stream from the meta-level to the object level), but only one study so far has explored how this relationship is mapped in the brain 35. In the other track, researchers investigate EF, also referred to as cognitive control 30, 36, which is closely related to metacognition. Note, however, that EF are often not framed in metacognitive terms in the literature 37 (but see ref. 30). For the sake of concision, we limit our review to operational definitions that have been used in neuroscientific studies.

Metacognitive judgements

Cognitive neuroscientists have been using paradigms in which subjects make judgements on how confident they are with regard to their learning of some given material 10. These judgements are commonly referred to as metacognitive judgements, which can be viewed as a form of meta-knowledge (for reviews see Schwartz 38 and Nelson 39). Historically, researchers mostly resorted to paradigms known as Feelings of Knowing (FOK) 40 and Judgements of Learning (JOL) 41. FOK reflect a subject’s belief that they know the answer to a question or problem and would be able to recognise it from a list of alternatives, despite being unable to explicitly recall it 40. Here, the metacognitive judgement is thus made after a retrieval attempt. In contrast, JOL are prospective judgements, made during learning, of one’s ability to successfully recall an item on subsequent testing 41.

More recently, cognitive neuroscientists have used paradigms in which subjects make retrospective metacognitive judgements on their performance in a two-alternative Forced Choice task (2-AFC) 42. In 2-AFCs, subjects are asked to choose which of two presented options has the highest criterion value. Different domains can be involved, such as perception (e.g., visual or auditory) and memory. For example, subjects may be instructed to visually discriminate which one of two boxes contains more dots 43, identify the higher-contrast Gabor patch 44, or distinguish novel words from words that were previously learned 45 (Fig. 2). The subjects engage in metacognitive judgements by rating how confident they are in their decision in the task. Based on their responses, one can evaluate a subject’s metacognitive sensitivity (the ability to discriminate one’s own correct and incorrect judgements), metacognitive bias (the overall level of confidence during a task), and metacognitive efficiency (the level of metacognitive sensitivity when controlling for task performance 46; Fig. 3). Note that sensitivity and bias are independent aspects of metacognition: two subjects may display the same level of metacognitive sensitivity, yet one may be biased towards high confidence while the other is biased towards low confidence. Because metacognitive sensitivity is affected by task difficulty (a subject tends to display greater metacognitive sensitivity in easy tasks than in difficult ones, and different subjects may find a task more or less easy), metacognitive efficiency is an important measure, as it allows researchers to compare metacognitive abilities between subjects and between domains. The most commonly used methods to assess metacognitive sensitivity during retrospective judgements are the receiver operating characteristic (ROC) curve and meta-d′ 46. Both derive from signal detection theory (SDT) 47, which allows Type 1 sensitivity, or d′ (how well a subject can discriminate between stimulus alternatives, i.e. object-level processes), to be differentiated from metacognitive sensitivity (a judgement on the correctness of this decision) 48. Importantly, only comparing meta-d′ to d′ seems to give reliable assessments of metacognitive efficiency 49. A meta-d′/d′ ratio of 1 indicates that a subject was perfectly able to discriminate between their correct and incorrect judgements, whereas a ratio of 0.8 suggests that only 80% of the task-related sensory evidence was available for the metacognitive judgements. Table 1 provides an overview of the different types of tasks and protocols with regard to the type of metacognitive process they operationalise. These operationalisations of meta-knowledge are used in combination with brain imaging methods (functional and structural magnetic resonance imaging; fMRI and MRI) to identify brain regions associated with metacognitive activity and metacognitive abilities 10, 50. Alternatively, transcranial magnetic stimulation (TMS) can be used to temporarily deactivate chosen brain regions and test whether this affects metacognitive abilities in given tasks 51, 52.
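To make these quantities concrete, the following is a minimal Python sketch of Type 1 sensitivity under equal-variance SDT and of the meta-d′/d′ efficiency ratio. All numbers are hypothetical, and note that estimating meta-d′ itself requires fitting a model to confidence-rating data; here a fitted value is simply assumed in order to illustrate the ratio.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Type 1 sensitivity under equal-variance SDT: d' = z(HR) - z(FAR)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical subject: 80% hits, 20% false alarms in a 2-AFC task.
d1 = d_prime(0.80, 0.20)   # ~1.68

# Suppose a model fitted to this subject's confidence ratings yields
# meta-d' = 1.35 (a hypothetical value). The efficiency ratio is then:
meta_d = 1.35
efficiency = meta_d / d1   # ~0.8, i.e. ~80% of the Type 1 evidence
print(round(d1, 2), round(efficiency, 2))
```

A ratio near 1 would indicate that the subject's confidence ratings exploit essentially all of the evidence underlying their Type 1 decisions.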

figure 2

a Visual perception task: subjects choose the box containing the most (randomly generated) dots. Subjects then rate their confidence in their decision. b Memory task: subjects learn a list of words. In the next screen, they have to identify which of two words shown was present on the list. The subjects then rate their confidence in their decision.

figure 3

The red and blue curves represent the distribution of confidence ratings for incorrect and correct trials, respectively. A larger distance between the two curves denotes higher sensitivity. Displacement to the left and right denote biases towards low confidence (low metacognitive bias) and high confidence (high metacognitive bias), respectively (retrieved from Fig. 1 in Fleming and Lau 46 ). We repeat the disclaimer of the original authors that this figure is not a statistically accurate description of correct and incorrect responses, which are typically not normally distributed 46 , 47 .
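The separation between these two confidence distributions can be summarised nonparametrically as the area under the type 2 ROC curve: the probability that a randomly chosen correct trial received a higher confidence rating than a randomly chosen incorrect one (0.5 = no metacognitive sensitivity, 1.0 = perfect). A minimal sketch, with hypothetical ratings on a 1–4 confidence scale:

```python
def auroc2(conf_correct, conf_incorrect):
    """Area under the type 2 ROC, computed as the Mann-Whitney
    probability that a correct trial outranks an incorrect one
    (ties count as 0.5)."""
    pairs = [(c, i) for c in conf_correct for i in conf_incorrect]
    wins = sum(1.0 if c > i else 0.5 if c == i else 0.0 for c, i in pairs)
    return wins / len(pairs)

# Hypothetical confidence ratings, split by trial accuracy.
print(auroc2([4, 3, 4, 2, 3], [2, 1, 3, 2]))  # -> 0.85
```

Unlike meta-d′, this measure does not control for Type 1 performance, which is why it indexes metacognitive sensitivity rather than efficiency.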

A recent meta-analysis analysed 47 neuroimaging studies on metacognition and identified a domain-general network associated with high vs. low confidence ratings in both decision-making tasks (perception 2-AFC) and memory tasks (JOL, FOK) 11. This network includes the medial and lateral prefrontal cortex (mPFC and lPFC, respectively), the precuneus and the insula. In contrast, the right anterior dorsolateral PFC (dlPFC) was specifically involved in decision-making tasks, and the bilateral parahippocampal cortex was specific to memory tasks. In addition, prospective judgements were associated with the posterior mPFC, left dlPFC and right insula, whereas retrospective judgements were associated with the bilateral parahippocampal cortex and left inferior frontal gyrus. Finally, emerging evidence suggests a role of the right rostrolateral PFC (rlPFC) 53, 54, anterior PFC (aPFC) 44, 45, 55, 56, dorsal anterior cingulate cortex (dACC) 54, 55 and precuneus 45, 55 in metacognitive sensitivity (meta-d′, ROC). In addition, several studies suggest that the aPFC relates to metacognition specifically in perception-related 2-AFC tasks, whereas the precuneus is engaged specifically in memory-related 2-AFC tasks 45, 55, 56. This may suggest that metacognitive processes engage some regions in a domain-specific manner, while other regions are domain-general. For educational scientists, this could mean that some domains of metacognition may be more relevant for learning and, granted sufficient plasticity of the associated brain regions, that targeting them during interventions may show more substantial benefits. Note that rating one’s confidence and metacognitive sensitivity likely involve additional, peripheral cognitive processes rather than purely metacognitive ones. These regions are therefore associated with metacognition, but not uniquely so.
Notably, a recent meta-analysis 50 suggests that domain-specific and domain-general signals may rather share common circuitry, but that their neural signature varies depending on the type of task or activity, showing that domain-generality in metacognition is complex and still needs to be better understood.

In terms of the role of metacognitive judgements in future behaviour, one study found that brain patterns associated with the desire for cognitive offloading (i.e., meta-control) only partially overlap with those associated with meta-knowledge (metacognitive judgements of confidence), suggesting that meta-control is driven either by non-metacognitive processes in addition to metacognitive ones, or by a combination of different domain-specific meta-knowledge processes 35.

Executive function

In EF, processes such as error detection/monitoring and effort monitoring can be related to meta-knowledge while error correction, inhibitory control, and resource allocation can be related to meta-control 36 . To activate these processes, participants are asked to perform tasks in laboratory settings such as Flanker tasks, Stroop tasks, Demand Selection tasks and Motion Discrimination tasks (Fig. 4 ). Neural correlates of EF are investigated by having subjects perform such tasks while their brain activity is recorded with fMRI or electroencephalography (EEG). Additionally, patients with brain lesions can be tested against healthy participants to evaluate the functional role of the impaired regions 57 .

figure 4

a Flanker task: subjects indicate the direction in which the middle arrow points. b Stroop task: subjects are presented with the name of a colour printed in a colour that either matches or mismatches the name; they are asked to give either the written colour name or the printed colour. c Motion Discrimination task: subjects determine in which direction the dots are moving, with varying levels of noise. d Example of a Demand Selection task: in both options subjects have to switch between two tasks. In task one, subjects determine whether the number shown is higher or lower than 5; in task two, whether the number is odd or even. The two options (low and high demand) differ in their degree of task switching, and hence in the effort required. Subjects are allowed to switch between the two options. Note that the type of task is indicated solely by the colour of the number, and that subjects are not explicitly told about the difference in effort between the two options (retrieved from Fig. 1c in Froböse et al. 58).

In a review article on the neural basis of EF (in which they are defined as meta-control), Shimamura argues that a network of regions composed of the aPFC, ACC, ventrolateral PFC (vlPFC) and dlPFC is involved in the regulation of cognition 30. These regions are not only interconnected but are also intricately connected to cortical and subcortical regions outside of the PFC. The vlPFC was shown to play an important role in “selecting and maintaining information in working memory”, whereas the dlPFC is involved in “manipulating and updating information in working memory” 30. The ACC has been proposed to monitor cognitive conflict (e.g. in a Stroop task or a Flanker task), and the dlPFC to regulate it 58, 59. In particular, activity in the ACC during conflict monitoring (meta-knowledge) seems to contribute to control of cognition (meta-control) in the dlPFC 60, 61 and to “bias behavioural decision-making toward cognitively efficient tasks and strategies” (p. 356) 62. In a recent fMRI study, subjects performed a motion discrimination task (Fig. 4c) 63. After deciding on the direction of the motion, they were presented with additional motion (i.e. post-decisional evidence) and were then asked to rate their confidence in their initial choice. The post-decisional evidence was encoded in the activity of the posterior medial frontal cortex (pMFC; meta-knowledge), while the lateral aPFC (meta-control) modulated the impact of this evidence on the subsequent confidence rating 63. Finally, results from a meta-analysis on cognitive control identified functional connectivity between the pMFC, associated with monitoring and informing other regions about the need for regulation, and the lPFC, which would effectively regulate cognition 64.

Online vs. offline metacognition

While the processes engaged during tasks such as those used in EF research can be considered metacognitive in the sense that they are higher-order functions that monitor and control lower cognitive processes, scientists have argued that they are not functionally equivalent to metacognitive judgements 10, 11, 65, 66. Indeed, engaging in metacognitive judgements requires subjects to reflect on past or future activities. As such, metacognitive judgements can be considered offline metacognitive processes. In contrast, higher-order processes involved in decision-making tasks such as those used in EF research are arguably largely made on the fly, or online, at a rapid pace, and subjects do not need to reflect on their actions to perform them. Hence, we propose to explicitly distinguish online and offline processes. Other researchers have shared a similar view and some have proposed models of metacognition that make similar distinctions 65, 66, 67, 68. The functional difference between online and offline metacognition is supported by some evidence. For instance, event-related brain potential (ERP) studies suggest that error negativities are associated with error detection in general, whereas an increased error positivity specifically encodes errors that subjects could report on 69, 70. Furthermore, brain-imaging studies suggest that the MFC and ACC are involved in online meta-knowledge, while the aPFC and lPFC seem to be activated when subjects engage in more offline meta-knowledge and meta-control, respectively 63, 71, 72. An overview of the different tasks can be found in Table 1, and a list of different studies on metacognition can be found in Supplementary Table 1 (organised in terms of the type of processes investigated, the protocols and brain measures used, along with the brain regions identified).
Figure 5 illustrates the different brain regions associated with meta-knowledge and meta-control, distinguishing between what we consider to be online and offline processes. This distinction is often not made explicitly but it will be specifically helpful when building bridges between cognitive neuroscience and educational sciences.

figure 5

The regions are divided into online meta-knowledge and meta-control, and offline meta-knowledge and meta-control following the distinctions introduced earlier. Some regions have been reported to be related to both offline and online processes and are therefore given a striped pattern.

Training metacognition

There are extensive accounts in the literature of efforts to improve EF components such as inhibitory control, attention shifting and working memory 22 . While working memory does not directly reflect metacognitive abilities, its training is often hypothesised to improve general cognitive abilities and academic achievement. However, most meta-analyses found that training methods lead only to weak, non-lasting effects on cognitive control 73 , 74 , 75 . One meta-analysis did find evidence of near-transfer following EF training in children (in particular working memory, inhibitory control and cognitive flexibility), but found no evidence of far-transfer 20 . According to this study, training on one component leads to improved abilities in that same component but not in other EF components. Regarding adults, however, one meta-analysis suggests that EF training in general and working memory training specifically may both lead to significant near- and far-transfer effects 76 . On a neural level, a meta-analysis showed that cognitive training resulted in decreased brain activity in brain regions associated with EF 77 . According to the authors, this indicates that “training interventions reduce demands on externally focused attention” (p. 193) 77 .

With regard to meta-knowledge, several studies have reported increased task-related metacognitive abilities after training. For example, researchers found that subjects who received feedback on their metacognitive judgements in a perceptual decision-making task displayed better metacognitive accuracy, not only in the trained task but also in an untrained memory task 78. Relatedly, Baird and colleagues 79 found that a two-week mindfulness meditation training led to enhanced meta-knowledge in the memory domain, but not the perceptual domain. The authors link these results to evidence of increased grey matter density in the aPFC of meditation practitioners.

Metacognition in cognitive neuroscience has mainly been studied through the lens of metacognitive judgements and EF (specifically, performance monitoring and cognitive control). Meta-knowledge is commonly activated in subjects by asking them to rate their confidence in having successfully performed a task. A distinction is made between metacognitive sensitivity, metacognitive bias and metacognitive efficiency. Monitoring and regulating processes in EF are mainly operationalised with behavioural tasks such as Flanker tasks, Stroop tasks, Motion Discrimination tasks and Demand Selection tasks. In addition, metacognitive judgements can be viewed as offline processes in that they require the subject to reflect on their cognition and develop meta-representations. In contrast, EF can be considered mostly online metacognitive processes, because monitoring and regulation mostly happen rapidly, without the need for reflective thinking.

Although there is some evidence for domain specificity, other studies have suggested that there is a single network of regions involved in all metacognitive tasks, but differentially activated in different task contexts. Comparing research on meta-knowledge and meta-control also suggests that some regions play a crucial role in both knowledge and regulation (Fig. 5). We have also identified a specific set of regions that are involved in either offline or online meta-knowledge. The evidence in favour of metacognitive training, while mixed, is interesting. In particular, research on offline meta-knowledge training involving self-reflection and metacognitive accuracy has shown some promising results. The regions that show structural changes after training were those that we earlier identified as being part of the metacognition network. EF training does seem to show far-transfer effects, at least in adults, but the relevance for everyday-life activity is still unclear.

One major limitation of current research in metacognition is ecological validity. It is unclear to what extent the operationalisations reviewed above reflect real-life metacognition. For instance, are people who can accurately judge their performance on a behavioural task also able to accurately assess how they performed during an exam? Are people with high levels of error regulation and inhibitory control able to learn more efficiently? Note that criticism on the ecological validity of neurocognitive operationalisations extends beyond metacognition research 16 . A solution for improving validity may be to compare operationalisations of metacognition in cognitive neuroscience with the ones in educational sciences, which have shown clear links with learning in formal education. This also applies to metacognitive training.

Metacognition in educational sciences

The most popular protocols used to measure metacognition in educational sciences are self-report questionnaires or interviews, learning journals and thinking-aloud protocols 31, 80. During interviews, subjects are asked to answer questions regarding hypothetical situations 81. In learning journals, students write about their learning experience and their thoughts on learning 82, 83. In thinking-aloud protocols, subjects are asked to verbalise their thoughts while performing a problem-solving task 80. Each of these instruments can be used to study meta-knowledge and meta-control. For instance, one of the most widely used questionnaires, the Metacognitive Awareness Inventory (MAI) 42, operationalises “Flavellian” metacognition and has dedicated scales for meta-knowledge and meta-control (also popular are the MSLQ 84 and LASSI 85, which operate under SRL). The meta-knowledge scale of the MAI operationalises knowledge of strategies (e.g., “I am aware of what strategies I use when I study”) and self-awareness (e.g., “I am a good judge of how well I understand something”); the meta-control scale operationalises planning (e.g., “I set a goal before I begin a task”) and use of learning strategies (e.g., “I summarize what I’ve learned after I finish”). Learning journals, self-report questionnaires and interviews involve offline metacognition. Thinking aloud, though not engaging the same degree of self-reflection, also involves offline metacognition in the sense that online processes are verbalised, which necessitates offline processing (see Table 1 for an overview and Supplementary Table 2 for more details).

More recently, methodologies borrowed from cognitive neuroscience have been introduced to study EF in educational settings 22 , 86 . In particular, researchers used classic cognitive control tasks such as the Stroop task (for a meta-analysis 86 ). Most of the studied components are related to meta-control and not meta-knowledge. For instance, the BRIEF 87 is a questionnaire completed by parents and teachers which assesses different subdomains of EF: (1) inhibition, shifting, and emotional control which can be viewed as online metacognitive control, and (2) planning, organisation of materials, and monitoring, which can be viewed as offline meta-control 87 .

Assessment of metacognition is usually compared against metrics of academic performance such as grades or scores on designated tasks. A recent meta-analysis reported a weak correlation of self-report questionnaires and interviews with academic performance, whereas thinking-aloud protocols correlated highly 88 . Offline meta-knowledge processes operationalised by learning journals were found to be positively associated with academic achievement when related to reflection on learning activities but negatively associated when related to reflection on learning materials, indicating that the type of reflection is important 89 . EF have been associated with abilities in mathematics (mainly) and reading comprehension 86 . However, the literature points in contrary directions as to which specific EF component is involved in academic achievement. This may be due to the different groups that were studied, to different operationalisations or to different theoretical underpinnings for EF 86 . For instance, online and offline metacognitive processes, which are not systematically distinguished in the literature, may play different roles in academic achievement. Moreover, the bulk of research has focused on young children, with few studies on adolescents 86 , and EF may play a role to varying extents at different stages of life.

A critical question in educational sciences concerns the nature of the relationship between metacognition and academic achievement: can learning at school be enhanced by training metacognitive abilities? Does higher metacognition lead to higher academic achievement? Do these features evolve in parallel? Developmental research provides valuable insights into the formation of metacognitive abilities that can inform training designs in terms of which aspect of metacognition should be supported and the age at which interventions may yield the best results. First, meta-knowledge seems to emerge around the age of 5 and meta-control around 8, and both develop over the years 90 , with evidence for the development of meta-knowledge into adolescence 91 . Furthermore, current theories propose that meta-knowledge abilities are initially highly domain-dependent and gradually become more domain-independent as knowledge and experience are acquired and linked between domains 32 . Meta-control is believed to evolve in a similar fashion 90 , 92 .

Common methods used to train offline metacognition are direct instruction of metacognition, metacognitive prompts and learning journals. In addition, research has been done on the use of (self-directed) feedback as a means to induce self-reflection in students, mainly in computer-supported settings 93 . Interestingly, learning journals appear to be used both for assessing and for fostering metacognition. Metacognitive instruction consists of teaching learners strategies to “activate” their metacognition. Metacognitive prompts most often consist of text pieces that are sent at specific times and that trigger reflection (offline meta-knowledge) on learning behaviour in the form of a question, hint or reminder.

Meta-analyses have investigated the effects of direct metacognitive instruction on students’ use of learning strategies and academic outcomes 18 , 94 , 95 . Their findings show that metacognitive instruction can have a positive effect on learning abilities and achievement within a population ranging from primary schoolers to university students. In particular, interventions led to the highest effect sizes when they both (i) instructed a combination of metacognitive strategies with an emphasis on planning strategies (offline meta-control) and (ii) “provided students with knowledge about strategies” (offline meta-knowledge) and “illustrated the benefits of applying the trained strategies, or even stimulated metacognitive reasoning” (p.114) 18 . The longer the duration of the intervention, the more effective it was. The strongest effects on academic performance were observed in the context of mathematics, followed by reading and writing.

While metacognitive prompts and learning journals make up the larger part of the literature on metacognitive training 96 , meta-analyses that specifically investigate their effectiveness have yet to be performed. Nonetheless, evidence suggests that such interventions can be successful. Researchers found that metacognitive prompts fostered the use of metacognitive strategies (offline meta-control) and that the combination of cognitive and metacognitive prompts improved learning outcomes 97 . Another experiment showed that students who received metacognitive prompts performed more metacognitive activities inside the learning environment and displayed better transfer performance immediately after the intervention 98 . A similar study using self-directed prompts showed enhanced transfer performance that was still observable 3 weeks after the intervention 99 .

Several studies suggest that learning journals can enhance metacognition. Subjects who kept a learning journal displayed stronger meta-control and meta-knowledge on learning tasks and tended to reach higher academic outcomes 100 , 101 , 102 . However, how the learning journal is used seems to be critical; good instructions are crucial 97 , 103 , and subjects who simply summarise their learning activity benefit less from the intervention than subjects who reflect about their knowledge, learning and learning goals 104 . An overview of studies using learning journals and metacognitive prompts to train metacognition can be found in Supplementary Table 3 .

In recent years, educational neuroscience researchers have tried to determine whether training and improvements in EF can lead to learning facilitation and higher academic achievement. Training may consist of having students continually perform behavioural tasks either in the lab, at home, or at school. Current evidence in favour of training EF is mixed, with only anecdotal evidence for positive effects 105 . A meta-analysis did not show evidence for a causal relationship between EF and academic achievement 19 , but suggested that the relationship is bidirectional, meaning that the two are “mutually supportive” 106 .

A recent review article has identified several gaps and shortcomings in the literature on metacognitive training 96 . Overall, research on metacognitive training has mainly focused on developing learners’ meta-control rather than meta-knowledge. Furthermore, most of the interventions were done in the context of science learning. Critically, there appears to be a lack of studies that employed randomised control designs, such that the effects of metacognitive training interventions are often difficult to evaluate. In addition, research overwhelmingly investigated metacognitive prompts and learning journals in adults 96 , while interventions on EF mainly focused on young children 22 . Lastly, meta-analyses evaluating the effectiveness of metacognitive training have so far focused on metacognitive instruction in children. There is thus a clear imbalance between the meta-analyses performed and the scope of the literature available.

An important caveat of educational sciences research is that metacognition is not typically framed in terms of online and offline metacognition. Therefore, it can be unclear whether protocols operationalise online or offline processes and whether interventions tend to benefit online or offline metacognition more. There is also confusion in terms of which processes qualify as EF, and definitions of it vary substantially 86 . For instance, Clements and colleagues mention work on SRL to illustrate research on EF in relation to academic achievement, but the two stem from different lines of research, one rooted in metacognition and socio-cognitive theory 31 and the other in the cognitive (neuro)science of decision-making. In addition, the MSLQ, as discussed above, assesses offline metacognition along with other components relevant to SRL, whereas EF can be mainly understood as online metacognition (see Table 1 ), which on the neural level may rely on different circuitry.

Offline metacognition tends to be investigated in school settings, whereas EF (e.g., with the Stroop task or the BRIEF) is evaluated in the lab. Common to all protocols for offline metacognition is that they consist of a form of self-report from the learner, either during the learning activity (thinking-aloud protocols) or after the learning activity (questionnaires, interviews and learning journals). Questionnaires are popular protocols because they are easy to administer, but they have been criticised for providing biased evaluations of metacognitive abilities. In contrast, learning journals evaluate the degree to which learners engage in reflective thinking and may therefore be less prone to bias. Lastly, it is unclear to what extent thinking-aloud protocols are sensitive to online metacognitive processes, such as on-the-fly error correction and effort regulation. The strength of the relationship between metacognitive abilities and academic achievement varies depending on how metacognition is operationalised. Self-report questionnaires and interviews are weakly related to achievement, whereas thinking-aloud protocols and EF are strongly related to it.

Based on the well-documented relationship between metacognition and academic achievement, educational scientists hypothesised that fostering metacognition may improve learning and academic achievement, and thus performed metacognitive training interventions. The most prevalent training protocols are direct metacognitive instruction, learning journals, and metacognitive prompts, which aim to induce and foster offline metacognitive processes such as self-reflection, planning and selecting learning strategies. In addition, researchers have investigated whether training EF, either through tasks or embedded in the curriculum, results in higher academic proficiency and achievement. While a large body of evidence suggests that metacognitive instruction, learning journals and metacognitive prompts can successfully improve academic achievement, interventions designed around EF training show mixed results. Future research investigating EF training in different age categories may clarify this situation. The varying success of these interventions may indicate that offline metacognition is more easily trainable than online metacognition and plays a more important role in educational settings. Investigating the effects of different methods, offline and online, on the neural level may provide researchers with insights into the trainability of different metacognitive processes.

In this article, we reviewed the literature on metacognition in educational sciences and cognitive neuroscience with the aim of identifying gaps in current research and proposing ways to address them through the exchange of insights between the two disciplines and interdisciplinary approaches. The main aspects analysed were operational definitions of metacognition and metacognitive training, through the lens of metacognitive knowledge and metacognitive control. Our review also highlighted an additional construct in the form of the distinction between online metacognition (on the fly and largely automatic) and offline metacognition (slower, reflective and requiring meta-representations). In cognitive neuroscience, research has focused on metacognitive judgements (mainly offline) and EF (mainly online). Metacognition is operationalised with tasks carried out in the lab, and performance on these tasks is mapped onto brain functions. In contrast, research in educational sciences typically measures metacognition in the context of learning activities, mostly in schools and universities. More recently, EF has been studied in educational settings to investigate its role in academic achievement and whether training it may benefit learning. Evidence on the latter is, however, mixed. Regarding metacognitive training in general, evidence from both disciplines suggests that interventions fostering learners’ self-reflection and knowledge of their learning behaviour (i.e., offline meta-knowledge) may benefit them most and increase academic achievement.

We focused on four aspects of research that could benefit from an interdisciplinary approach between the two areas: (i) validity and reliability of research protocols, (ii) under-researched dimensions of metacognition, (iii) metacognitive training, and (iv) domain-specificity vs. domain-generality of metacognitive abilities. To tackle these issues, we propose four avenues for integrated research: (i) investigate the degree to which different protocols relate to similar or different metacognitive constructs, (ii) implement designs and perform experiments to identify neural substrates necessary for offline meta-control, for example by borrowing protocols used in educational sciences, (iii) study the effects of (offline) meta-knowledge training on the brain, and (iv) perform developmental research on the metacognitive brain and compare it with the existing developmental literature in educational sciences regarding the domain-generality of metacognitive processes and abilities.

First, neurocognitive research on metacognitive judgements has developed robust operationalisations of offline meta-knowledge. However, these operationalisations often consist of specific tasks (e.g., 2-AFC) carried out in the lab. These tasks are often very narrow and do not resemble the challenges and complexities of behaviours associated with learning in schools and universities. Thus, one may question to what extent they reflect real-life metacognition, and to what extent protocols developed in educational sciences and cognitive neuroscience actually operationalise the same components of metacognition. We propose that comparing different protocols from both disciplines that are, a priori, operationalising the same types of metacognitive processes can help evaluate the ecological validity of protocols used in cognitive neuroscience, and allow for more holistic assessments of metacognition, provided that it is clear which protocol assesses which construct. Degrees of correlation between different protocols, within and between disciplines, may allow researchers to assess to what extent they reflect the same metacognitive constructs and also to identify which protocols are most appropriate for studying a specific construct. For example, a relation between meta-d′ metacognitive sensitivity in a 2-AFC task and the meta-knowledge subscale of the MAI would provide external validity to the former. Moreover, educational scientists would be provided with bias-free tools to assess metacognition. These tools may enable researchers to further investigate to what extent metacognitive bias, sensitivity and efficiency each play a role in educational settings. In contrast, a low correlation may highlight a difference in domain between the two measures of metacognition. For instance, metacognitive judgements in brain research are made on isolated behaviours, and meta-d′ can thus be viewed as reflecting “local” metacognitive sensitivity.
It is also unclear to what extent processes involved in these decision-making tasks cover those taking place in a learning environment. When answering self-report questionnaires, however, subjects make metacognitive judgements on a large set of (learning) activities, and the measures may thus resemble a more “global” or domain-general metacognitive sensitivity. In addition, learners in educational settings tend to receive feedback, immediate or delayed, on their learning activities and performance, which is generally not the case for cognitive neuroscience protocols. Therefore, investigating metacognitive judgements in the presence of performance or social feedback may allow researchers to better understand the metacognitive processes at play in educational settings. Devising a global measure of metacognition in the lab by aggregating subjects’ metacognitive abilities in different domains, or investigating to what extent local metacognition may affect global metacognition, could improve ecological validity significantly. By investigating the neural correlates of educational measures of metacognition, researchers may be able to better understand to what extent the constructs studied in the two disciplines are related. It is indeed possible that, though weakly correlated, the meta-knowledge scale of the MAI and meta-d′ share a common neural basis.
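To make the proposed protocol comparison concrete, the sketch below computes a simple lab-based index of metacognitive sensitivity from confidence-rated 2-AFC data. It uses the non-parametric type-2 AUROC as a rough proxy; the meta-d′ measure discussed above requires fitting a signal detection model (Maniscalco & Lau’s method) and is not implemented here. All data are invented for illustration.

```python
# Sketch: a non-parametric index of metacognitive sensitivity from a
# confidence-rated 2-AFC task. This is a type-2 AUROC, a rough proxy
# for the model-based meta-d' measure; the trial data are made up.

def type2_auroc(correct, confidence):
    """P(confidence on a correct trial > confidence on an error trial),
    counting ties as 0.5 (the Mann-Whitney formulation of the AUROC)."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    errs = [c for ok, c in zip(correct, confidence) if not ok]
    pairs = [(h, e) for h in hits for e in errs]
    score = sum(1.0 if h > e else 0.5 if h == e else 0.0
                for h, e in pairs)
    return score / len(pairs)

# Hypothetical subject: mostly confident when correct, unsure when wrong.
correct    = [True, True, True, False, True, False, True, False]
confidence = [4,    3,    4,    2,     3,    1,     4,    3]
print(round(type2_auroc(correct, confidence), 3))  # 0.933
```

Computing such an index per subject and correlating it across a sample with, say, MAI meta-knowledge scores would quantify the degree of convergence between the two operationalisations (0.5 indicates no metacognitive sensitivity, 1.0 perfect discrimination of one's own errors).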

Second, our review highlights gaps in the literature of both disciplines regarding research on certain types of metacognitive processes. There is a lack of research on offline meta-control (or strategic regulation of cognition) in neuroscience, whereas this construct is widely studied in educational sciences. More specifically, while there exists research on EF related to planning (e.g. 107 ), common experimental designs make it hard to disentangle online from offline metacognitive processes. A few studies have implemented subject reports (e.g., awareness of error or desire for reminders) to pinpoint the neural substrates specifically involved in offline meta-control, and the current evidence points at a role of the lPFC. More research implementing similar designs may clarify this construct. Alternatively, researchers may exploit educational sciences protocols, such as self-report questionnaires, learning journals, metacognitive prompts and feedback, to investigate offline meta-control processes in the brain and their relation to academic proficiency and achievement.

Third, there is only one study known to us on the training of meta-knowledge in the lab 78 . In contrast, meta-knowledge training in educational sciences has been widely studied, in particular with metacognitive prompts and learning journals, although a systematic review would be needed to identify the benefits for learning. In cognitive neuroscience, studies suggest that offline meta-knowledge trained in and outside the lab (i.e., through metacognitive judgements and meditation, respectively) transfers to meta-knowledge in other lab tasks. The case of meditation is particularly interesting since meditation has been demonstrated to benefit varied aspects of everyday life 108 . Given its importance for efficient regulation of cognition, training (offline) meta-knowledge may present the largest benefits to academic achievement. Hence, it is important to investigate development in the brain relative to meta-knowledge training. Evidence on metacognitive training in educational sciences tends to suggest that offline metacognition is more “plastic” and may therefore benefit learning more than online metacognition. Furthermore, it is important to have a good understanding of the developmental trajectory of metacognitive abilities, not only on a behavioural level but also on a neural level, to identify critical periods for successful training. Doing so would also allow researchers to investigate the potential differences in terms of plasticity that we mention above. Currently, the developmental trajectory of metacognition is under-studied in cognitive neuroscience, with only one study that found an overlap between the neural correlates of metacognition in adults and children 109 . On a side note, future research could explore the potential role of genetic factors in metacognitive abilities to better understand to what extent and under what constraints they can be trained.

Fourth, domain-specific and domain-general aspects of metacognitive processes should be further investigated. Educational scientists have studied the development of metacognition in learners and have concluded that metacognitive abilities are initially domain-specific (meaning that their quality depends on the type of learning activity, such as mathematics vs. writing) and progressively evolve towards domain-general abilities as knowledge and expertise increase. Similarly, neurocognitive evidence points towards a common network for (offline) metacognitive knowledge which engages the different regions to varying degrees depending on the domain of the activity (i.e., perception, memory, etc.). Investigating this network from a developmental perspective and comparing findings with the existing behavioural literature may improve our understanding of the metacognitive brain and link the two bodies of evidence. It may also enable researchers to identify stages of life more suitable for certain types of metacognitive intervention.

Dunlosky, J. & Metcalfe, J. Metacognition (SAGE Publications, 2008).

Pintrich, P. R. The role of metacognitive knowledge in learning, teaching, and assessing. Theory Into Pract. 41 , 219–225 (2002).

Zimmerman, B. J. Self-regulated learning and academic achievement: an overview. Educ. Psychol. 25 , 3–17 (1990).

Zimmerman, B. J. & Schunk, D. H. Self-Regulated Learning and Academic Achievement: Theoretical Perspectives (Routledge, 2001).

Baker, L. & Brown, A. L. Metacognitive Skills and Reading. In Handbook of Reading Research Vol. 1 (ed. Pearson, P. D.) 353–395 (Longman, 1984).

Mckeown, M. G. & Beck, I. L. The role of metacognition in understanding and supporting reading comprehension. In Handbook of Metacognition in Education (eds Hacker, D. J., Dunlosky, J. & Graesser, A. C.) 19–37 (Routledge, 2009).

Desoete, A., Roeyers, H. & Buysse, A. Metacognition and mathematical problem solving in grade 3. J. Learn. Disabil. 34 , 435–447 (2001).

Veenman, M., Kok, R. & Blöte, A. W. The relation between intellectual and metacognitive skills in early adolescence. Instructional Sci. 33 , 193–211 (2005).

Harris, K. R., Graham, S., Brindle, M. & Sandmel, K. Metacognition and children’s writing. In Handbook of metacognition in education 131–153 (Routledge, 2009).

Fleming, S. M. & Dolan, R. J. The neural basis of metacognitive ability. Philos. Trans. R. Soc. B 367 , 1338–1349 (2012).

Vaccaro, A. G. & Fleming, S. M. Thinking about thinking: a coordinate-based meta-analysis of neuroimaging studies of metacognitive judgements. Brain Neurosci. Adv. 2 , https://doi.org/10.1177/2398212818810591 (2018).

Ferrari, M. What can neuroscience bring to education? Educ. Philos. Theory 43 , 31–36 (2011).

Zadina, J. N. The emerging role of educational neuroscience in education reform. Psicol. Educ. 21 , 71–77 (2015).

van der Meulen, A., Krabbendam, L. & de Ruyter, D. Educational neuroscience: its position, aims and expectations. Br. J. Educ. Stud. 63 , 229–243 (2015).

Varma, S., McCandliss, B. D. & Schwartz, D. L. Scientific and pragmatic challenges for bridging education and neuroscience. Educ. Res. 37 , 140–152 (2008).

van Atteveldt, N., van Kesteren, M. T. R., Braams, B. & Krabbendam, L. Neuroimaging of learning and development: improving ecological validity. Frontline Learn. Res. 6 , 186–203 (2018).

Hruby, G. G. Three requirements for justifying an educational neuroscience. Br. J. Educ. Psychol. 82 , 1–23 (2012).

Dignath, C., Buettner, G. & Langfeldt, H.-P. How can primary school students learn self-regulated learning strategies most effectively?: A meta-analysis on self-regulation training programmes. Educ. Res. Rev. 3 , 101–129 (2008).

Jacob, R. & Parkinson, J. The potential for school-based interventions that target executive function to improve academic achievement: a review. Rev. Educ. Res. 85 , 512–552 (2015).

Kassai, R., Futo, J., Demetrovics, Z. & Takacs, Z. K. A meta-analysis of the experimental evidence on the near- and far-transfer effects among children’s executive function skills. Psychol. Bull. 145 , 165–188 (2019).

Roebers, C. M. Executive function and metacognition: towards a unifying framework of cognitive self-regulation. Dev. Rev. 45 , 31–51 (2017).

Clements, D. H., Sarama, J. & Germeroth, C. Learning executive function and early mathematics: directions of causal relations. Early Child. Res. Q. 36 , 79–90 (2016).

Nelson, T. O. & Narens, L. Metamemory. In Perspectives on the Development of Memory and Cognition (eds Kail, R. V. & Hagen, J. W.) 3–33 (Erlbaum, 1977).

Baird, J. R. Improving learning through enhanced metacognition: a classroom study. Eur. J. Sci. Educ. 8 , 263–282 (1986).

Flavell, J. H. & Wellman, H. M. Metamemory (1975).

Flavell, J. H. Metacognition and cognitive monitoring: a new area of cognitive–developmental inquiry. Am. Psychol. 34 , 906 (1979).

Livingston, J. A. Metacognition: An Overview. (2003).

Nelson, T. O. Metamemory: a theoretical framework and new findings. In Psychology of Learning and Motivation Vol. 26 (ed. Bower, G. H.) 125–173 (Academic Press, 1990).

Nelson, T. O. & Narens, L. Why investigate metacognition. In Metacognition: Knowing About Knowing (eds Metcalfe, J. & Shimamura, A. P.) 1–25 (MIT Press, 1994).

Shimamura, A. P. A Neurocognitive approach to metacognitive monitoring and control. In Handbook of Metamemory and Memory (eds Dunlosky, J. & Bjork, R. A.) (Routledge, 2014).

Dinsmore, D. L., Alexander, P. A. & Loughlin, S. M. Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educ. Psychol. Rev. 20 , 391–409 (2008).

Borkowski, J. G., Chan, L. K. & Muthukrishna, N. A process-oriented model of metacognition: links between motivation and executive functioning. In Issues in the Measurement of Metacognition (eds Schraw, G. & Impara, J. C.) 1–42 (Buros Institute of Mental Measurements, 2000).

Risko, E. F. & Gilbert, S. J. Cognitive offloading. Trends Cogn. Sci. 20 , 676–688 (2016).

Gilbert, S. J. et al. Optimal use of reminders: metacognition, effort, and cognitive offloading. J. Exp. Psychol. 149 , 501 (2020).

Boldt, A. & Gilbert, S. Distinct and overlapping neural correlates of metacognitive monitoring and metacognitive control. Preprint at PsyArXiv https://psyarxiv.com/3dz9b/ (2020).

Fernandez-Duque, D., Baird, J. A. & Posner, M. I. Executive attention and metacognitive regulation. Conscious Cogn. 9 , 288–307 (2000).

Baker, L., Zeliger-Kandasamy, A. & DeWyngaert, L. U. Neuroimaging evidence of comprehension monitoring. Psihol. teme 23 , 167–187 (2014).

Schwartz, B. L. Sources of information in metamemory: Judgments of learning and feelings of knowing. Psychon. Bull. Rev. 1 , 357–375 (1994).

Nelson, T. O. Metamemory, psychology of. In International Encyclopedia of the Social & Behavioral Sciences (eds Smelser, N. J. & Baltes, P. B.) 9733–9738 (Pergamon, 2001).

Hart, J. T. Memory and the feeling-of-knowing experience. J. Educ. Psychol. 56 , 208 (1965).

Arbuckle, T. Y. & Cuddy, L. L. Discrimination of item strength at time of presentation. J. Exp. Psychol. 81 , 126 (1969).

Fechner, G. T. Elemente der Psychophysik (Breitkopf & Härtel, 1860).

Rouault, M., Seow, T., Gillan, C. M. & Fleming, S. M. Psychiatric symptom dimensions are associated with dissociable shifts in metacognition but not task performance. Biol. Psychiatry 84 , 443–451 (2018).

Fleming, S. M., Weil, R. S., Nagy, Z., Dolan, R. J. & Rees, G. Relating introspective accuracy to individual differences in brain structure. Science 329 , 1541–1543 (2010).

McCurdy, L. Y. et al. Anatomical coupling between distinct metacognitive systems for memory and visual perception. J. Neurosci. 33 , 1897–1906 (2013).

Fleming, S. M. & Lau, H. C. How to measure metacognition. Front. Hum. Neurosci. 8 https://doi.org/10.3389/fnhum.2014.00443 (2014).

Galvin, S. J., Podd, J. V., Drga, V. & Whitmore, J. Type 2 tasks in the theory of signal detectability: discrimination between correct and incorrect decisions. Psychon. Bull. Rev. 10 , 843–876 (2003).

Metcalfe, J. & Schwartz, B. L. The ghost in the machine: self-reflective consciousness and the neuroscience of metacognition. In The Oxford Handbook of Metamemory (eds Dunlosky, J. & Tauber, S. K.) 407–424 (Oxford University Press, 2016).

Maniscalco, B. & Lau, H. A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Conscious Cognition 21 , 422–430 (2012).

Rouault, M., McWilliams, A., Allen, M. G. & Fleming, S. M. Human metacognition across domains: insights from individual differences and neuroimaging. Personal. Neurosci. 1 https://doi.org/10.1017/pen.2018.16 (2018).

Rounis, E., Maniscalco, B., Rothwell, J. C., Passingham, R. E. & Lau, H. Theta-burst transcranial magnetic stimulation to the prefrontal cortex impairs metacognitive visual awareness. Cogn. Neurosci. 1 , 165–175 (2010).

Ye, Q., Zou, F., Lau, H., Hu, Y. & Kwok, S. C. Causal evidence for mnemonic metacognition in human precuneus. J. Neurosci. 38 , 6379–6387 (2018).

Fleming, S. M., Huijgen, J. & Dolan, R. J. Prefrontal contributions to metacognition in perceptual decision making. J. Neurosci. 32 , 6117–6125 (2012).

Morales, J., Lau, H. & Fleming, S. M. Domain-general and domain-specific patterns of activity supporting metacognition in human prefrontal cortex. J. Neurosci. 38 , 3534–3546 (2018).

Baird, B., Smallwood, J., Gorgolewski, K. J. & Margulies, D. S. Medial and lateral networks in anterior prefrontal cortex support metacognitive ability for memory and perception. J. Neurosci. 33 , 16657–16665 (2013).

Fleming, S. M., Ryu, J., Golfinos, J. G. & Blackmon, K. E. Domain-specific impairment in metacognitive accuracy following anterior prefrontal lesions. Brain 137 , 2811–2822 (2014).

Baldo, J. V., Shimamura, A. P., Delis, D. C., Kramer, J. & Kaplan, E. Verbal and design fluency in patients with frontal lobe lesions. J. Int. Neuropsychol. Soc. 7 , 586–596 (2001).

Froböse, M. I. et al. Catecholaminergic modulation of the avoidance of cognitive control. J. Exp. Psychol. Gen. 147 , 1763 (2018).

Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S. & Cohen, J. D. Conflict monitoring and cognitive control. Psychol. Rev. 108 , 624 (2001).


Acknowledgements

We would like to thank the University of Amsterdam for supporting this research through the Interdisciplinary Doctorate Agreement grant. W.v.d.B. is further supported by the Jacobs Foundation, European Research Council (grant no. ERC-2018-StG-803338), the European Union Horizon 2020 research and innovation programme (grant no. DiGYMATEX-870578), and the Netherlands Organization for Scientific Research (grant no. NWO-VIDI 016.Vidi.185.068).

Author information

Authors and Affiliations

Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands

Damien S. Fleur & Bert Bredeweg

Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands

Damien S. Fleur & Wouter van den Bos

Faculty of Education, Amsterdam University of Applied Sciences, Amsterdam, the Netherlands

Bert Bredeweg

Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany

Wouter van den Bos


Contributions

D.S.F., B.B. and W.v.d.B. conceived the main conceptual idea of this review article. D.S.F. wrote the manuscript with inputs from and under the supervision of B.B. and W.v.d.B.

Corresponding author

Correspondence to Damien S. Fleur.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Fleur, D.S., Bredeweg, B. & van den Bos, W. Metacognition: ideas and insights from neuro- and educational sciences. npj Sci. Learn. 6 , 13 (2021). https://doi.org/10.1038/s41539-021-00089-5


Received: 06 October 2020

Accepted: 09 April 2021

Published: 08 June 2021

DOI: https://doi.org/10.1038/s41539-021-00089-5




CBE Life Sci Educ, v.17(1), Spring 2018

Understanding the Complex Relationship between Critical Thinking and Science Reasoning among Undergraduate Thesis Writers

Jason E. Dowd

† Department of Biology, Duke University, Durham, NC 27708

Robert J. Thompson, Jr.

‡ Department of Psychology and Neuroscience, Duke University, Durham, NC 27708

Leslie A. Schiff

§ Department of Microbiology and Immunology, University of Minnesota, Minneapolis, MN 55455

Julie A. Reynolds


This study empirically examines the relationship between students’ critical-thinking skills and scientific reasoning as reflected in undergraduate thesis writing in biology. Writing offers a unique window into studying this relationship, and the findings raise potential implications for instruction.

Developing critical-thinking and scientific reasoning skills is a core learning objective of science education, but little empirical evidence exists regarding the interrelationships between these constructs. Writing effectively fosters students’ development of these constructs, and it offers a unique window into studying how they relate. In this study of undergraduate thesis writing in biology at two universities, we examine how scientific reasoning exhibited in writing (assessed using the Biology Thesis Assessment Protocol) relates to general and specific critical-thinking skills (assessed using the California Critical Thinking Skills Test), and we consider implications for instruction. We find that scientific reasoning in writing is strongly related to inference , while other aspects of science reasoning that emerge in writing (epistemological considerations, writing conventions, etc.) are not significantly related to critical-thinking skills. Science reasoning in writing is not merely a proxy for critical thinking. In linking features of students’ writing to their critical-thinking skills, this study 1) provides a bridge to prior work suggesting that engagement in science writing enhances critical thinking and 2) serves as a foundational step for subsequently determining whether instruction focused explicitly on developing critical-thinking skills (particularly inference ) can actually improve students’ scientific reasoning in their writing.

INTRODUCTION

Critical-thinking and scientific reasoning skills are core learning objectives of science education for all students, regardless of whether or not they intend to pursue a career in science or engineering. Consistent with the view of learning as construction of understanding and meaning ( National Research Council, 2000 ), the pedagogical practice of writing has been found to be effective not only in fostering the development of students’ conceptual and procedural knowledge ( Gerdeman et al. , 2007 ) and communication skills ( Clase et al. , 2010 ), but also scientific reasoning ( Reynolds et al. , 2012 ) and critical-thinking skills ( Quitadamo and Kurtz, 2007 ).

Critical thinking and scientific reasoning are similar but different constructs that include various types of higher-order cognitive processes, metacognitive strategies, and dispositions involved in making meaning of information. Critical thinking is generally understood as the broader construct ( Holyoak and Morrison, 2005 ), comprising an array of cognitive processes and dispositions that are drawn upon differentially in everyday life and across domains of inquiry such as the natural sciences, social sciences, and humanities. Scientific reasoning, then, may be interpreted as the subset of critical-thinking skills (cognitive and metacognitive processes and dispositions) that 1) are involved in making meaning of information in scientific domains and 2) support the epistemological commitment to scientific methodology and paradigm(s).

Although there has been an enduring focus in higher education on promoting critical thinking and reasoning as general or “transferable” skills, research evidence provides increasing support for the view that reasoning and critical thinking are also situational or domain specific ( Beyer et al. , 2013 ). Some researchers, such as Lawson (2010) , present frameworks in which science reasoning is characterized explicitly in terms of critical-thinking skills. There are, however, limited coherent frameworks and empirical evidence regarding either the general or domain-specific interrelationships of scientific reasoning, as it is most broadly defined, and critical-thinking skills.

The Vision and Change in Undergraduate Biology Education Initiative provides a framework for thinking about these constructs and their interrelationship in the context of the core competencies and disciplinary practice they describe ( American Association for the Advancement of Science, 2011 ). These learning objectives aim for undergraduates to “understand the process of science, the interdisciplinary nature of the new biology and how science is closely integrated within society; be competent in communication and collaboration; have quantitative competency and a basic ability to interpret data; and have some experience with modeling, simulation and computational and systems level approaches as well as with using large databases” ( Woodin et al. , 2010 , pp. 71–72). This framework makes clear that science reasoning and critical-thinking skills play key roles in major learning outcomes; for example, “understanding the process of science” requires students to engage in (and be metacognitive about) scientific reasoning, and having the “ability to interpret data” requires critical-thinking skills. To help students better achieve these core competencies, we must better understand the interrelationships of their composite parts. Thus, the next step is to determine which specific critical-thinking skills are drawn upon when students engage in science reasoning in general and with regard to the particular scientific domain being studied. Such a determination could be applied to improve science education for both majors and nonmajors through pedagogical approaches that foster critical-thinking skills that are most relevant to science reasoning.

Writing affords one of the most effective means for making thinking visible ( Reynolds et al. , 2012 ) and learning how to “think like” and “write like” disciplinary experts ( Meizlish et al. , 2013 ). As a result, student writing affords the opportunities to both foster and examine the interrelationship of scientific reasoning and critical-thinking skills within and across disciplinary contexts. The purpose of this study was to better understand the relationship between students’ critical-thinking skills and scientific reasoning skills as reflected in the genre of undergraduate thesis writing in biology departments at two research universities, the University of Minnesota and Duke University.

In the following subsections, we discuss in greater detail the constructs of scientific reasoning and critical thinking, as well as the assessment of scientific reasoning in students’ thesis writing. In subsequent sections, we discuss our study design, findings, and the implications for enhancing educational practices.

Critical Thinking

The advances in cognitive science in the 21st century have increased our understanding of the mental processes involved in thinking and reasoning, as well as memory, learning, and problem solving. Critical thinking is understood to include both a cognitive dimension and a disposition dimension (e.g., reflective thinking) and is defined as “purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based” ( Facione, 1990 , p. 3). Although various other definitions of critical thinking have been proposed, researchers have generally coalesced around this consensus expert view ( Blattner and Frazier, 2002 ; Condon and Kelly-Riley, 2004 ; Bissell and Lemons, 2006 ; Quitadamo and Kurtz, 2007 ) and the corresponding measures of critical-thinking skills ( August, 2016 ; Stephenson and Sadler-McKnight, 2016 ).

Both the cognitive skills and dispositional components of critical thinking have been recognized as important to science education ( Quitadamo and Kurtz, 2007 ). Empirical research demonstrates that specific pedagogical practices in science courses are effective in fostering students’ critical-thinking skills. Quitadamo and Kurtz (2007) found that students who engaged in a laboratory writing component in the context of a general education biology course significantly improved their overall critical-thinking skills (and their analytical and inference skills, in particular), whereas students engaged in a traditional quiz-based laboratory did not improve their critical-thinking skills. In related work, Quitadamo et al. (2008) found that a community-based inquiry experience, involving inquiry, writing, research, and analysis, was associated with improved critical thinking in a biology course for nonmajors, compared with traditionally taught sections. In both studies, students who exhibited stronger presemester critical-thinking skills exhibited stronger gains, suggesting that “students who have not been explicitly taught how to think critically may not reach the same potential as peers who have been taught these skills” ( Quitadamo and Kurtz, 2007 , p. 151).

Recently, Stephenson and Sadler-McKnight (2016) found that first-year general chemistry students who engaged in a science writing heuristic laboratory, which is an inquiry-based, writing-to-learn approach to instruction ( Hand and Keys, 1999 ), had significantly greater gains in total critical-thinking scores than students who received traditional laboratory instruction. Each of the four components—inquiry, writing, collaboration, and reflection—have been linked to critical thinking ( Stephenson and Sadler-McKnight, 2016 ). Like the other studies, this work highlights the value of targeting critical-thinking skills and the effectiveness of an inquiry-based, writing-to-learn approach to enhance critical thinking. Across studies, authors advocate adopting critical thinking as the course framework ( Pukkila, 2004 ) and developing explicit examples of how critical thinking relates to the scientific method ( Miri et al. , 2007 ).

In these examples, the important connection between writing and critical thinking is highlighted by the fact that each intervention involves the incorporation of writing into science, technology, engineering, and mathematics education (either alone or in combination with other pedagogical practices). However, critical-thinking skills are not always the primary learning outcome; in some contexts, scientific reasoning is the primary outcome that is assessed.

Scientific Reasoning

Scientific reasoning is a complex process that is broadly defined as “the skills involved in inquiry, experimentation, evidence evaluation, and inference that are done in the service of conceptual change or scientific understanding” ( Zimmerman, 2007 , p. 172). Scientific reasoning is understood to include both conceptual knowledge and the cognitive processes involved with generation of hypotheses (i.e., inductive processes involved in the generation of hypotheses and the deductive processes used in the testing of hypotheses), experimentation strategies, and evidence evaluation strategies. These dimensions are interrelated, in that “experimentation and inference strategies are selected based on prior conceptual knowledge of the domain” ( Zimmerman, 2000 , p. 139). Furthermore, conceptual and procedural knowledge and cognitive process dimensions can be general and domain specific (or discipline specific).

With regard to conceptual knowledge, attention has been focused on the acquisition of core methodological concepts fundamental to scientists’ causal reasoning and metacognitive distancing (or decontextualized thinking), which is the ability to reason independently of prior knowledge or beliefs ( Greenhoot et al. , 2004 ). The latter involves what Kuhn and Dean (2004) refer to as the coordination of theory and evidence, which requires that one question existing theories (i.e., prior knowledge and beliefs), seek contradictory evidence, eliminate alternative explanations, and revise one’s prior beliefs in the face of contradictory evidence. Kuhn and colleagues (2008) further elaborate that scientific thinking requires “a mature understanding of the epistemological foundations of science, recognizing scientific knowledge as constructed by humans rather than simply discovered in the world,” and “the ability to engage in skilled argumentation in the scientific domain, with an appreciation of argumentation as entailing the coordination of theory and evidence” ( Kuhn et al. , 2008 , p. 435). “This approach to scientific reasoning not only highlights the skills of generating and evaluating evidence-based inferences, but also encompasses epistemological appreciation of the functions of evidence and theory” ( Ding et al. , 2016 , p. 616). Evaluating evidence-based inferences involves epistemic cognition, which Moshman (2015) defines as the subset of metacognition that is concerned with justification, truth, and associated forms of reasoning. Epistemic cognition is both general and domain specific (or discipline specific; Moshman, 2015 ).

There is empirical support for the contributions of both prior knowledge and an understanding of the epistemological foundations of science to scientific reasoning. In a study of undergraduate science students, advanced scientific reasoning was most often accompanied by accurate prior knowledge as well as sophisticated epistemological commitments; additionally, for students who had comparable levels of prior knowledge, skillful reasoning was associated with a strong epistemological commitment to the consistency of theory with evidence ( Zeineddin and Abd-El-Khalick, 2010 ). These findings highlight the importance of the need for instructional activities that intentionally help learners develop sophisticated epistemological commitments focused on the nature of knowledge and the role of evidence in supporting knowledge claims ( Zeineddin and Abd-El-Khalick, 2010 ).

Scientific Reasoning in Students’ Thesis Writing

Pedagogical approaches that incorporate writing have also focused on enhancing scientific reasoning. Many rubrics have been developed to assess aspects of scientific reasoning in written artifacts. For example, Timmerman and colleagues (2011) , in the course of describing their own rubric for assessing scientific reasoning, highlight several examples of scientific reasoning assessment criteria ( Haaga, 1993 ; Tariq et al. , 1998 ; Topping et al. , 2000 ; Kelly and Takao, 2002 ; Halonen et al. , 2003 ; Willison and O’Regan, 2007 ).

At both the University of Minnesota and Duke University, we have focused on the genre of the undergraduate honors thesis as the rhetorical context in which to study and improve students’ scientific reasoning and writing. We view the process of writing an undergraduate honors thesis as a form of professional development in the sciences (i.e., a way of engaging students in the practices of a community of discourse). We have found that structured courses designed to scaffold the thesis-writing process and promote metacognition can improve writing and reasoning skills in biology, chemistry, and economics ( Reynolds and Thompson, 2011 ; Dowd et al. , 2015a , b ). In the context of this prior work, we have defined scientific reasoning in writing as the emergent, underlying construct measured across distinct aspects of students’ written discussion of independent research in their undergraduate theses.

The Biology Thesis Assessment Protocol (BioTAP) was developed at Duke University as a tool for systematically guiding students and faculty through a “draft–feedback–revision” writing process, modeled after professional scientific peer-review processes ( Reynolds et al. , 2009 ). BioTAP includes activities and worksheets that allow students to engage in critical peer review and provides detailed descriptions, presented as rubrics, of the questions (i.e., dimensions, shown in Table 1 ) upon which such review should focus. Nine rubric dimensions focus on communication to the broader scientific community, and four rubric dimensions focus on the accuracy and appropriateness of the research. These rubric dimensions provide criteria by which the thesis is assessed, and therefore allow BioTAP to be used as an assessment tool as well as a teaching resource ( Reynolds et al. , 2009 ). Full details are available at www.science-writing.org/biotap.html .

Table 1. Theses assessment protocol dimensions

In previous work, we have used BioTAP to quantitatively assess students’ undergraduate honors theses and explore the relationship between thesis-writing courses (or specific interventions within the courses) and the strength of students’ science reasoning in writing across different science disciplines: biology ( Reynolds and Thompson, 2011 ); chemistry ( Dowd et al. , 2015b ); and economics ( Dowd et al. , 2015a ). We have focused exclusively on the nine dimensions related to reasoning and writing (questions 1–9), as the other four dimensions (questions 10–13) require topic-specific expertise and are intended to be used by the student’s thesis supervisor.

Beyond considering individual dimensions, we have investigated whether meaningful constructs underlie students’ thesis scores. We conducted exploratory factor analysis of students’ theses in biology, economics, and chemistry and found one dominant underlying factor in each discipline; we termed the factor “scientific reasoning in writing” ( Dowd et al. , 2015a , b , 2016 ). That is, each of the nine dimensions could be understood as reflecting, in different ways and to different degrees, the construct of scientific reasoning in writing. The findings indicated evidence of both general and discipline-specific components to scientific reasoning in writing that relate to epistemic beliefs and paradigms, in keeping with broader ideas about science reasoning discussed earlier. Specifically, scientific reasoning in writing is more strongly associated with formulating a compelling argument for the significance of the research in the context of current literature in biology, making meaning regarding the implications of the findings in chemistry, and providing an organizational framework for interpreting the thesis in economics. We suggested that instruction, whether occurring in writing studios or in writing courses to facilitate thesis preparation, should attend to both components.
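The one-factor structure described above can be illustrated numerically. The sketch below simulates rubric scores driven by a single latent factor and checks that the first eigenvalue of the correlation matrix dominates, which is the core logic behind an exploratory factor analysis. All data here are synthetic and for illustration only; nothing is drawn from the study’s actual BioTAP scores, and the loadings and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate BioTAP-style scores: 9 rubric dimensions for 65 theses,
# all driven by a single latent "scientific reasoning in writing"
# factor plus dimension-specific noise. (Illustrative data only.)
n_theses, n_dims = 65, 9
latent = rng.normal(size=(n_theses, 1))              # one underlying factor
loadings = rng.uniform(0.6, 0.9, size=(1, n_dims))   # assumed factor loadings
scores = latent @ loadings + 0.5 * rng.normal(size=(n_theses, n_dims))

# A one-factor structure shows up as a dominant first eigenvalue of
# the correlation matrix of the nine dimensions.
eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
print(eigvals[0] / eigvals.sum())   # proportion of variance on factor 1
```

With these assumed loadings, the first eigenvalue accounts for most of the shared variance, mirroring the "one dominant underlying factor" result reported across the three disciplines.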

Research Question and Study Design

The genre of thesis writing combines the pedagogies of writing and inquiry found to foster scientific reasoning ( Reynolds et al. , 2012 ) and critical thinking ( Quitadamo and Kurtz, 2007 ; Quitadamo et al. , 2008 ; Stephenson and Sadler-McKnight, 2016 ). However, there is no empirical evidence regarding the general or domain-specific interrelationships of scientific reasoning and critical-thinking skills, particularly in the rhetorical context of the undergraduate thesis. The BioTAP studies discussed earlier indicate that the rubric-based assessment produces evidence of scientific reasoning in the undergraduate thesis, but it was not designed to foster or measure critical thinking. The current study was undertaken to address the research question: How are students’ critical-thinking skills related to scientific reasoning as reflected in the genre of undergraduate thesis writing in biology? Determining these interrelationships could guide efforts to enhance students’ scientific reasoning and writing skills through focusing instruction on specific critical-thinking skills as well as disciplinary conventions.

To address this research question, we focused on undergraduate thesis writers in biology courses at two institutions, Duke University and the University of Minnesota, and examined the extent to which students’ scientific reasoning in writing, assessed in the undergraduate thesis using BioTAP, corresponds to students’ critical-thinking skills, assessed using the California Critical Thinking Skills Test (CCTST; August, 2016 ).

Study Sample

The study sample was composed of students enrolled in courses designed to scaffold the thesis-writing process in the Department of Biology at Duke University and the College of Biological Sciences at the University of Minnesota. Both courses complement students’ individual work with research advisors. The course is required for thesis writers at the University of Minnesota and optional for writers at Duke University. Not all students are required to complete a thesis, though it is required for students to graduate with honors; at the University of Minnesota, such students are enrolled in an honors program within the college. In total, 28 students were enrolled in the course at Duke University and 44 students were enrolled in the course at the University of Minnesota. Of those students, two students did not consent to participate in the study; additionally, five students did not validly complete the CCTST (i.e., attempted fewer than 60% of items or completed the test in less than 15 minutes). Thus, our overall rate of valid participation is 90%, with 27 students from Duke University and 38 students from the University of Minnesota. We found no statistically significant differences in thesis assessment between students with valid CCTST scores and invalid CCTST scores. Therefore, we focus on the 65 students who consented to participate and for whom we have complete and valid data in most of this study. Additionally, in asking students for their consent to participate, we allowed them to choose whether to provide or decline access to academic and demographic background data. Of the 65 students who consented to participate, 52 students granted access to such data. Therefore, for additional analyses involving academic and background data, we focus on the 52 students who consented. 
We note that the 13 students who participated but declined to share additional data scored slightly lower on the CCTST than the other 52 (perhaps suggesting that they differ on other measures as well, though we cannot determine this with certainty). Among the 52 students, 60% identified as female and 10% identified as being from underrepresented ethnicities.

In both courses, students completed the CCTST online, either in class or on their own, late in the Spring 2016 semester. This is the same assessment that was used in prior studies of critical thinking ( Quitadamo and Kurtz, 2007 ; Quitadamo et al. , 2008 ; Stephenson and Sadler-McKnight, 2016 ). It is “an objective measure of the core reasoning skills needed for reflective decision making concerning what to believe or what to do” ( Insight Assessment, 2016a ). In the test, students are asked to read and consider information as they answer multiple-choice questions. The questions are intended to be appropriate for all users, so there is no expectation of prior disciplinary knowledge in biology (or any other subject). Although actual test items are protected, sample items are available on the Insight Assessment website ( Insight Assessment, 2016b ). We have included one sample item in the Supplemental Material.

The CCTST is based on a consensus definition of critical thinking, measures cognitive and metacognitive skills associated with critical thinking, and has been evaluated for validity and reliability at the college level ( August, 2016 ; Stephenson and Sadler-McKnight, 2016 ). In addition to providing an overall critical-thinking score, the CCTST assesses seven dimensions of critical thinking: analysis, interpretation, inference, evaluation, explanation, induction, and deduction. Scores on each dimension are calculated based on students’ performance on items related to that dimension. Analysis focuses on identifying assumptions, reasons, and claims and examining how they interact to form arguments. Interpretation, related to analysis, focuses on determining the precise meaning and significance of information. Inference focuses on drawing conclusions from reasons and evidence. Evaluation focuses on assessing the credibility of sources of information and claims they make. Explanation, related to evaluation, focuses on describing the evidence, assumptions, or rationale for beliefs and conclusions. Induction focuses on drawing inferences about what is probably true based on evidence. Deduction focuses on drawing conclusions about what must be true when the context completely determines the outcome. These are not independent dimensions; the fact that they are related supports their collective interpretation as critical thinking. Together, the CCTST dimensions provide a basis for evaluating students’ overall strength in using reasoning to form reflective judgments about what to believe or what to do ( August, 2016 ). Each of the seven dimensions and the overall CCTST score are measured on a scale of 0–100, where higher scores indicate stronger performance. Scores correspond to superior (86–100), strong (79–85), moderate (70–78), weak (63–69), or not manifested (62 and below) skills.
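The published score bands can be expressed as a small lookup, which is convenient when tabulating results. A minimal sketch (the function name is ours for illustration, not part of the CCTST materials):

```python
def cctst_band(score: float) -> str:
    """Map a CCTST dimension or overall score (0-100) to its published skill band."""
    if score >= 86:
        return "superior"
    if score >= 79:
        return "strong"
    if score >= 70:
        return "moderate"
    if score >= 63:
        return "weak"
    return "not manifested"
```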

Scientific Reasoning in Writing

At the end of the semester, students’ final, submitted undergraduate theses were assessed using BioTAP, which consists of nine rubric dimensions that focus on communication to the broader scientific community and four additional dimensions that focus on the exhibition of topic-specific expertise ( Reynolds et al. , 2009 ). These dimensions, framed as questions, are displayed in Table 1 .

Student theses were assessed on questions 1–9 of BioTAP using the same procedures described in previous studies ( Reynolds and Thompson, 2011 ; Dowd et al. , 2015a , b ). In this study, six raters were trained in the valid, reliable use of BioTAP rubrics. Each dimension was rated on a five-point scale: 1 indicates the dimension is missing, incomplete, or below acceptable standards; 3 indicates that the dimension is adequate but not exhibiting mastery; and 5 indicates that the dimension is excellent and exhibits mastery (intermediate ratings of 2 and 4 are appropriate when different parts of the thesis make a single category challenging). After training, two raters independently assessed each thesis and then discussed their independent ratings with one another to form a consensus rating. The consensus score is not an average score, but rather an agreed-upon, discussion-based score. On a five-point scale, raters independently assessed dimensions to be within 1 point of each other 82.4% of the time before discussion and formed consensus ratings 100% of the time after discussion.
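The 82.4% figure is a simple proportion of paired independent ratings that fall within 1 point of each other on the five-point scale. A sketch of that computation (function name and data are ours, purely illustrative):

```python
def within_one_agreement(ratings_a, ratings_b):
    """Fraction of paired ratings that differ by at most 1 point on the rubric scale."""
    pairs = list(zip(ratings_a, ratings_b))
    hits = sum(1 for a, b in pairs if abs(a - b) <= 1)
    return hits / len(pairs)

# Hypothetical ratings from two raters on four rubric dimensions:
# differences are 1, 2, 0, 1, so 3 of 4 pairs agree within 1 point.
agreement = within_one_agreement([5, 3, 4, 1], [4, 5, 4, 2])  # 0.75
```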

In this study, we consider both categorical (mastery/nonmastery, where a score of 5 corresponds to mastery) and numerical treatments of individual BioTAP scores to better relate the manifestation of critical thinking in BioTAP assessment to all of the prior studies. For comprehensive/cumulative measures of BioTAP, we focus on the partial sum of questions 1–5, as these questions relate to higher-order scientific reasoning (whereas questions 6–9 relate to mid- and lower-order writing mechanics [ Reynolds et al. , 2009 ]), and the factor scores (i.e., numerical representations of the extent to which each student exhibits the underlying factor), which are calculated from the factor loadings published by Dowd et al. (2016) . We do not focus on questions 6–9 individually in statistical analyses, because we do not expect critical-thinking skills to relate to mid- and lower-order writing skills.

The final, submitted thesis reflects the student’s writing, the student’s scientific reasoning, the quality of feedback provided to the student by peers and mentors, and the student’s ability to incorporate that feedback into his or her work. Therefore, our assessment is not the same as an assessment of unpolished, unrevised samples of students’ written work. While one might imagine that such an unpolished sample may be more strongly correlated with critical-thinking skills measured by the CCTST, we argue that the complete, submitted thesis, assessed using BioTAP, is ultimately a more appropriate reflection of how students exhibit science reasoning in the scientific community.

Statistical Analyses

We took several steps to analyze the collected data. First, to provide context for subsequent interpretations, we generated descriptive statistics for the CCTST scores of the participants based on the norms for undergraduate CCTST test takers. To determine the strength of relationships among CCTST dimensions (including overall score) and the BioTAP dimensions, partial-sum score (questions 1–5), and factor score, we calculated Pearson’s correlations for each pair of measures. To examine whether falling on one side of the nonmastery/mastery threshold (as opposed to a linear scale of performance) was related to critical thinking, we grouped BioTAP dimensions into categories (mastery/nonmastery) and conducted Student’s t tests to compare the mean scores of the two groups on each of the seven dimensions and overall score of the CCTST. Finally, for the strongest relationship that emerged, we included additional academic and background variables as covariates in multiple linear-regression analysis to explore questions about how much observed relationships between critical-thinking skills and science reasoning in writing might be explained by variation in these other factors.
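For reference, Pearson’s r between two measures reduces to the covariance of the paired scores divided by the product of their standard deviations. A plain-Python sketch (in practice, statistical software would presumably be used):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear illustrative data yield r = 1.0.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```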

Although BioTAP scores represent discrete, ordinal bins, the five-point scale is intended to capture an underlying continuous construct (from inadequate to exhibiting mastery). It has been argued that five categories is an appropriate cutoff for treating ordinal variables as pseudo-continuous ( Rhemtulla et al. , 2012 )—and therefore using continuous-variable statistical methods (e.g., Pearson’s correlations)—as long as the underlying assumption that ordinal scores are linearly distributed is valid. Although we have no way to statistically test this assumption, we interpret adequate scores to be approximately halfway between inadequate and mastery scores, resulting in a linear scale. In part because this assumption is subject to disagreement, we also consider and interpret a categorical (mastery/nonmastery) treatment of BioTAP variables.

We corrected for multiple comparisons using the Holm-Bonferroni method ( Holm, 1979 ). At the most general level, where we consider the single, comprehensive measures for BioTAP (partial-sum and factor score) and the CCTST (overall score), there is no need to correct for multiple comparisons, because the multiple, individual dimensions are collapsed into single dimensions. When we considered individual CCTST dimensions in relation to comprehensive measures for BioTAP, we accounted for seven comparisons; similarly, when we considered individual dimensions of BioTAP in relation to overall CCTST score, we accounted for five comparisons. When all seven CCTST and five BioTAP dimensions were examined individually and without prior knowledge, we accounted for 35 comparisons; such a rigorous threshold is likely to reject weak and moderate relationships, but it is appropriate if there are no specific pre-existing hypotheses. All p values are presented in tables for complete transparency, and we carefully consider the implications of our interpretation of these data in the Discussion section.
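Holm’s step-down procedure orders the p values from smallest to largest and compares the k-th smallest against α/(m − k + 1); with m = 35 comparisons, the smallest p value must fall below 0.05/35 ≈ 0.00143 to be declared significant, which is the cutoff used in this study. A minimal sketch:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Return a list of booleans: whether each null hypothesis is rejected
    under Holm's sequentially rejective (step-down) procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p value against alpha / (m - rank).
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p values are retained
    return reject
```

Note that, unlike a plain Bonferroni correction, the threshold relaxes as successive hypotheses are rejected, which preserves control of the family-wise error rate at less cost in power.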

CCTST scores for students in this sample ranged from the 39th to 99th percentile of the general population of undergraduate CCTST test takers (mean percentile = 84.3, median = 85th percentile; Table 2 ); these percentiles reflect overall scores that range from moderate to superior. Scores on individual dimensions and overall scores were sufficiently normal and far enough from the ceiling of the scale to justify subsequent statistical analyses.

Descriptive statistics of CCTST dimensions a

a Scores correspond to superior (86–100), strong (79–85), moderate (70–78), weak (63–69), or not manifested (62 and lower) skills.

The Pearson’s correlations between students’ cumulative scores on BioTAP (the factor score based on loadings published by Dowd et al. , 2016 , and the partial sum of scores on questions 1–5) and students’ overall scores on the CCTST are presented in Table 3 . We found that the partial-sum measure of BioTAP was significantly related to the overall measure of critical thinking ( r = 0.27, p = 0.03), while the BioTAP factor score was marginally related to overall CCTST ( r = 0.24, p = 0.05). When we looked at relationships between comprehensive BioTAP measures and scores for individual dimensions of the CCTST ( Table 3 ), we found significant positive correlations between both the BioTAP partial-sum and factor scores and CCTST inference ( r = 0.45, p < 0.001, and r = 0.41, p < 0.001, respectively). Although some other relationships have p values below 0.05 (e.g., the correlations between BioTAP partial-sum scores and CCTST induction and interpretation scores), they are not significant when we correct for multiple comparisons.

Correlations between dimensions of CCTST and dimensions of BioTAP a

a In each cell, the top number is the correlation, and the bottom, italicized number is the associated p value. Correlations that are statistically significant after correcting for multiple comparisons are shown in bold.

b This is the partial sum of BioTAP scores on questions 1–5.

c This is the factor score calculated from factor loadings published by Dowd et al. (2016) .

When we expanded comparisons to include all 35 potential correlations among individual BioTAP and CCTST dimensions—and, accordingly, corrected for 35 comparisons—we did not find any additional statistically significant relationships. The Pearson’s correlations between students’ scores on each dimension of BioTAP and students’ scores on each dimension of the CCTST range from −0.11 to 0.35 ( Table 3 ); although the relationship between discussion of implications (BioTAP question 5) and inference appears to be relatively large ( r = 0.35), it is not significant ( p = 0.005; the Holm-Bonferroni cutoff is 0.00143). We found no statistically significant relationships between BioTAP questions 6–9 and CCTST dimensions (unpublished data), regardless of whether we correct for multiple comparisons.

The results of Student’s t tests comparing scores on each dimension of the CCTST of students who exhibit mastery with those of students who do not exhibit mastery on each dimension of BioTAP are presented in Table 4 . Focusing first on the overall CCTST scores, we found that the difference between those who exhibit mastery and those who do not in discussing implications of results (BioTAP question 5) is statistically significant ( t = 2.73, p = 0.008, d = 0.71). When we expanded t tests to include all 35 comparisons—and, as above, corrected for 35 comparisons—we found a significant difference in inference scores between students who exhibit mastery on question 5 and students who do not ( t = 3.41, p = 0.0012, d = 0.88), as well as a marginally significant difference in these students’ induction scores ( t = 3.26, p = 0.0018, d = 0.84; the Holm-Bonferroni cutoff is p = 0.00147). Cohen’s d effect sizes, which reveal the strength of the differences for statistically significant relationships, range from 0.71 to 0.88.
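For transparency about the quantities reported in Table 4: the two-sample Student’s t statistic uses a pooled variance, and Cohen’s d divides the mean difference by the pooled standard deviation. A generic sketch with illustrative data (not the study’s actual scores):

```python
import math
import statistics

def students_t_and_cohens_d(group1, group2):
    """Two-sample Student's t statistic (pooled variance) and Cohen's d."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    d = (m1 - m2) / math.sqrt(sp2)  # effect size in pooled-SD units
    return t, d

# Illustrative groups differing by 1 point with pooled SD = 2 give d = 0.5.
t, d = students_t_and_cohens_d([2, 4, 6], [1, 3, 5])
```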

The t statistics and effect sizes of differences in dimensions of CCTST across dimensions of BioTAP a

a In each cell, the top number is the t statistic for each comparison, the middle, italicized number is the associated p value, and the bottom number is the effect size. Differences that are statistically significant after correcting for multiple comparisons are shown in bold.

Finally, we more closely examined the strongest relationship that we observed, which was between the CCTST dimension of inference and the BioTAP partial-sum composite score (shown in Table 3 ), using multiple regression analysis ( Table 5 ). Focusing on the 52 students for whom we have background information, we looked at the simple relationship between BioTAP and inference (model 1), a robust background model including multiple covariates that one might expect to explain some part of the variation in BioTAP (model 2), and a combined model including all variables (model 3). As model 3 shows, the covariates explain very little variation in BioTAP scores, and the relationship between inference and BioTAP persists even in the presence of all of the covariates.
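Model 1, the simple regression of the BioTAP partial sum on inference scores, corresponds to an ordinary least-squares fit. A minimal sketch of that fit (models 2 and 3, with multiple covariates, would typically be estimated with a statistics package rather than by hand):

```python
def simple_ols(x, y):
    """Least-squares fit of y = a + b*x; returns (intercept, slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot   # r^2 = explained share of variance
```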

Partial sum (questions 1–5) of BioTAP scores ( n = 52)

** p < 0.01.

*** p < 0.001.

The aim of this study was to examine the extent to which the various components of scientific reasoning—manifested in writing in the genre of the undergraduate thesis and assessed using BioTAP—draw on general and specific critical-thinking skills (assessed using CCTST) and to consider the implications for educational practices. Although science reasoning involves critical-thinking skills, it also relates to conceptual knowledge and the epistemological foundations of science disciplines ( Kuhn et al. , 2008 ). Moreover, science reasoning in writing , captured in students’ undergraduate theses, reflects habits, conventions, and the incorporation of feedback that may alter evidence of individuals’ critical-thinking skills. Our findings, however, provide empirical evidence that cumulative measures of science reasoning in writing are nonetheless related to students’ overall critical-thinking skills ( Table 3 ). The particularly significant roles of inference skills ( Table 3 ) and the discussion of implications of results (BioTAP question 5; Table 4 ) provide a basis for more specific ideas about how these constructs relate to one another and what educational interventions may have the most success in fostering these skills.

Our results build on previous findings. The genre of thesis writing combines pedagogies of writing and inquiry found to foster scientific reasoning ( Reynolds et al. , 2012 ) and critical thinking ( Quitadamo and Kurtz, 2007 ; Quitadamo et al. , 2008 ; Stephenson and Sadler-McKnight, 2016 ). Quitadamo and Kurtz (2007) reported that students who engaged in a laboratory writing component in a general education biology course significantly improved their inference and analysis skills, and Quitadamo and colleagues (2008) found that participation in a community-based inquiry biology course (that included a writing component) was associated with significant gains in students’ inference and evaluation skills. The shared focus on inference is noteworthy, because these prior studies actually differ from the current study; the former considered critical-thinking skills as the primary learning outcome of writing-focused interventions, whereas the latter focused on emergent links between two learning outcomes (science reasoning in writing and critical thinking). In other words, inference skills are impacted by writing as well as manifested in writing.

Inference focuses on drawing conclusions from argument and evidence. According to the consensus definition of critical thinking, the specific skill of inference includes several processes: querying evidence, conjecturing alternatives, and drawing conclusions. All of these activities are central to the independent research at the core of writing an undergraduate thesis. Indeed, a critical part of what we call “science reasoning in writing” might be characterized as a measure of students’ ability to infer and make meaning of information and findings. Because the cumulative BioTAP measures distill underlying similarities and, to an extent, suppress unique aspects of individual dimensions, we argue that it is appropriate to relate inference to scientific reasoning in writing . Even when we control for other potentially relevant background characteristics, the relationship is strong ( Table 5 ).

In taking the complementary view and focusing on BioTAP, when we compared students who exhibit mastery with those who do not, we found that the specific dimension of “discussing the implications of results” (question 5) differentiates students’ performance on several critical-thinking skills. To achieve mastery on this dimension, students must make connections between their results and other published studies and discuss the future directions of the research; in short, they must demonstrate an understanding of the bigger picture. The specific relationship between question 5 and inference is the strongest observed among all individual comparisons. Altogether, perhaps more than any other BioTAP dimension, this aspect of students’ writing provides a clear view of the role of students’ critical-thinking skills (particularly inference and, marginally, induction) in science reasoning.

While inference and discussion of implications emerge as particularly strongly related dimensions in this work, we note that the strongest contribution to “science reasoning in writing in biology,” as determined through exploratory factor analysis, is “argument for the significance of research” (BioTAP question 2, not question 5; Dowd et al. , 2016 ). Question 2 is not clearly related to critical-thinking skills. These findings are not contradictory, but rather suggest that the epistemological and disciplinary-specific aspects of science reasoning that emerge in writing through BioTAP are not completely aligned with aspects related to critical thinking. In other words, science reasoning in writing is not simply a proxy for those critical-thinking skills that play a role in science reasoning.

In a similar vein, the content-related, epistemological aspects of science reasoning, as well as the conventions associated with writing the undergraduate thesis (including feedback from peers and revision), may explain the lack of significant relationships between some science reasoning dimensions and some critical-thinking skills that might otherwise seem counterintuitive (e.g., BioTAP question 2, which relates to making an argument, and the critical-thinking skill of argument). It is possible that an individual’s critical-thinking skills may explain some variation in a particular BioTAP dimension, but other aspects of science reasoning and practice exert much stronger influence. Although these relationships do not emerge in our analyses, the lack of significant correlation does not mean that there is definitively no correlation. Correcting for multiple comparisons suppresses type 1 error at the expense of exacerbating type 2 error, which, combined with the limited sample size, constrains statistical power and makes weak relationships more difficult to detect. Ultimately, though, the relationships that do emerge highlight places where individuals’ distinct critical-thinking skills emerge most coherently in thesis assessment, which is why we are particularly interested in unpacking those relationships.

We recognize that, because only honors students submit theses at these institutions, this study sample is composed of a selective subset of the larger population of biology majors. Although this is an inherent limitation of focusing on thesis writing, links between our findings and results of other studies (with different populations) suggest that observed relationships may occur more broadly. The goal of improved science reasoning and critical thinking is shared among all biology majors, particularly those engaged in capstone research experiences. So while the implications of this work most directly apply to honors thesis writers, we provisionally suggest that the findings may extend to the broader population of students.

There are several important implications of this study for science education practices. Students’ inference skills relate to the understanding and effective application of scientific content. The fact that we find no statistically significant relationships between BioTAP questions 6–9 and CCTST dimensions suggests that such mid- to lower-order elements of BioTAP ( Reynolds et al. , 2009 ), which tend to be more structural in nature, do not focus on aspects of the finished thesis that draw strongly on critical thinking. In keeping with prior analyses ( Reynolds and Thompson, 2011 ; Dowd et al. , 2016 ), these findings further reinforce the notion that disciplinary instructors, who are most capable of teaching and assessing scientific reasoning and perhaps least interested in the more mechanical aspects of writing, may nonetheless be best suited to effectively model and assess students’ writing.

The goal of the thesis writing course at both Duke University and the University of Minnesota is not merely to improve thesis scores but to move students’ writing into the category of mastery across BioTAP dimensions. Recognizing that students with differing critical-thinking skills (particularly inference) are more or less likely to achieve mastery in the undergraduate thesis (particularly in discussing implications [question 5]) is important for developing and testing targeted pedagogical interventions to improve learning outcomes for all students.

The competencies characterized by the Vision and Change in Undergraduate Biology Education Initiative provide a general framework for recognizing that science reasoning and critical-thinking skills play key roles in major learning outcomes of science education. Our findings highlight places where science reasoning–related competencies (like “understanding the process of science”) connect to critical-thinking skills and places where critical thinking–related competencies might be manifested in scientific products (such as the ability to discuss implications in scientific writing). We encourage broader efforts to build empirical connections between competencies and pedagogical practices to further improve science education.

One specific implication of this work for science education is to focus on providing opportunities for students to develop their critical-thinking skills (particularly inference). Of course, as this correlational study is not designed to test causality, we do not claim that enhancing students’ inference skills will improve science reasoning in writing. However, as prior work shows that science writing activities influence students’ inference skills ( Quitadamo and Kurtz, 2007 ; Quitadamo et al. , 2008 ), there is reason to test such a hypothesis. Nevertheless, the focus must extend beyond inference as an isolated skill; rather, it is important to relate inference to the foundations of the scientific method ( Miri et al. , 2007 ) in terms of the epistemological appreciation of the functions and coordination of evidence ( Kuhn and Dean, 2004 ; Zeineddin and Abd-El-Khalick, 2010 ; Ding et al. , 2016 ) and disciplinary paradigms of truth and justification ( Moshman, 2015 ).

Although this study is limited to the domain of biology at two institutions with a relatively small number of students, the findings represent a foundational step in the direction of achieving success with more integrated learning outcomes. Hopefully, it will spur greater interest in empirically grounding discussions of the constructs of scientific reasoning and critical-thinking skills.

This study contributes to the efforts to improve science education, for both majors and nonmajors, through an empirically driven analysis of the relationships between scientific reasoning reflected in the genre of thesis writing and critical-thinking skills. This work is rooted in the usefulness of BioTAP as a method 1) to facilitate communication and learning and 2) to assess disciplinary-specific and general dimensions of science reasoning. The findings support the important role of the critical-thinking skill of inference in scientific reasoning in writing, while also highlighting ways in which other aspects of science reasoning (epistemological considerations, writing conventions, etc.) are not significantly related to critical thinking. Future research into the impact of interventions focused on specific critical-thinking skills (i.e., inference) for improved science reasoning in writing will build on this work and its implications for science education.

Supplementary Material

Acknowledgments

We acknowledge the contributions of Kelaine Haas and Alexander Motten to the implementation and collection of data. We also thank Mine Çetinkaya-Rundel for her insights regarding our statistical analyses. This research was funded by National Science Foundation award DUE-1525602.

  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC. Retrieved September 26, 2017, from https://visionandchange.org/files/2013/11/aaas-VISchange-web1113.pdf
  • August D. (2016). California Critical Thinking Skills Test user manual and resource guide. San Jose: Insight Assessment/California Academic Press.
  • Beyer C. H., Taylor E., Gillmore G. M. (2013). Inside the undergraduate teaching experience: The University of Washington’s growth in faculty teaching study. Albany, NY: SUNY Press.
  • Bissell A. N., Lemons P. P. (2006). A new method for assessing critical thinking in the classroom. BioScience, (1), 66–72. https://doi.org/10.1641/0006-3568(2006)056[0066:ANMFAC]2.0.CO;2
  • Blattner N. H., Frazier C. L. (2002). Developing a performance-based assessment of students’ critical thinking skills. Assessing Writing, (1), 47–64.
  • Clase K. L., Gundlach E., Pelaez N. J. (2010). Calibrated peer review for computer-assisted learning of biological research competencies. Biochemistry and Molecular Biology Education, (5), 290–295.
  • Condon W., Kelly-Riley D. (2004). Assessing and teaching what we value: The relationship between college-level writing and critical thinking abilities. Assessing Writing, (1), 56–75. https://doi.org/10.1016/j.asw.2004.01.003
  • Ding L., Wei X., Liu X. (2016). Variations in university students’ scientific reasoning skills across majors, years, and types of institutions. Research in Science Education, (5), 613–632. https://doi.org/10.1007/s11165-015-9473-y
  • Dowd J. E., Connolly M. P., Thompson R. J., Jr., Reynolds J. A. (2015a). Improved reasoning in undergraduate writing through structured workshops. Journal of Economic Education, (1), 14–27. https://doi.org/10.1080/00220485.2014.978924
  • Dowd J. E., Roy C. P., Thompson R. J., Jr., Reynolds J. A. (2015b). “On course” for supporting expanded participation and improving scientific reasoning in undergraduate thesis writing. Journal of Chemical Education, (1), 39–45. https://doi.org/10.1021/ed500298r
  • Dowd J. E., Thompson R. J., Jr., Reynolds J. A. (2016). Quantitative genre analysis of undergraduate theses: Uncovering different ways of writing and thinking in science disciplines. WAC Journal, 36–51.
  • Facione P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Newark, DE: American Philosophical Association. Retrieved September 26, 2017, from https://philpapers.org/archive/FACCTA.pdf
  • Gerdeman R. D., Russell A. A., Worden K. J. (2007). Web-based student writing and reviewing in a large biology lecture course. Journal of College Science Teaching, (5), 46–52.
  • Greenhoot A. F., Semb G., Colombo J., Schreiber T. (2004). Prior beliefs and methodological concepts in scientific reasoning. Applied Cognitive Psychology, (2), 203–221. https://doi.org/10.1002/acp.959
  • Haaga D. A. F. (1993). Peer review of term papers in graduate psychology courses. Teaching of Psychology, (1), 28–32. https://doi.org/10.1207/s15328023top2001_5
  • Halonen J. S., Bosack T., Clay S., McCarthy M., Dunn D. S., Hill G. W., Whitlock K. (2003). A rubric for learning, teaching, and assessing scientific inquiry in psychology. Teaching of Psychology, (3), 196–208. https://doi.org/10.1207/S15328023TOP3003_01
  • Hand B., Keys C. W. (1999). Inquiry investigation. Science Teacher, (4), 27–29.
  • Holm S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, (2), 65–70.
  • Holyoak K. J., Morrison R. G. (2005). The Cambridge handbook of thinking and reasoning. New York: Cambridge University Press.
  • Insight Assessment. (2016a). California Critical Thinking Skills Test (CCTST). Retrieved September 26, 2017, from www.insightassessment.com/Products/Products-Summary/Critical-Thinking-Skills-Tests/California-Critical-Thinking-Skills-Test-CCTST
  • Insight Assessment. (2016b). Sample thinking skills questions. Retrieved September 26, 2017, from www.insightassessment.com/Resources/Teaching-Training-and-Learning-Tools/node_1487
  • Kelly G. J., Takao A. (2002). Epistemic levels in argument: An analysis of university oceanography students’ use of evidence in writing. Science Education, (3), 314–342. https://doi.org/10.1002/sce.10024
  • Kuhn D., Dean D., Jr. (2004). Connecting scientific reasoning and causal inference. Journal of Cognition and Development, (2), 261–288. https://doi.org/10.1207/s15327647jcd0502_5
  • Kuhn D., Iordanou K., Pease M., Wirkala C. (2008). Beyond control of variables: What needs to develop to achieve skilled scientific thinking? Cognitive Development, (4), 435–451. https://doi.org/10.1016/j.cogdev.2008.09.006
  • Lawson A. E. (2010). Basic inferences of scientific reasoning, argumentation, and discovery. Science Education, (2), 336–364. https://doi.org/10.1002/sce.20357
  • Meizlish D., LaVaque-Manty D., Silver N., Kaplan M. (2013). Think like/write like: Metacognitive strategies to foster students’ development as disciplinary thinkers and writers. In Thompson R. J. (Ed.), Changing the conversation about higher education (pp. 53–73). Lanham, MD: Rowman & Littlefield.
  • Miri B., David B.-C., Uri Z. (2007). Purposely teaching for the promotion of higher-order thinking skills: A case of critical thinking. Research in Science Education, (4), 353–369. https://doi.org/10.1007/s11165-006-9029-2
  • Moshman D. (2015). Epistemic cognition and development: The psychology of justification and truth. New York: Psychology Press.
  • National Research Council. (2000). How people learn: Brain, mind, experience, and school (expanded ed.). Washington, DC: National Academies Press.
  • Pukkila P. J. (2004). Introducing student inquiry in large introductory genetics classes. Genetics, (1), 11–18. https://doi.org/10.1534/genetics.166.1.11
  • Quitadamo I. J., Faiola C. L., Johnson J. E., Kurtz M. J. (2008). Community-based inquiry improves critical thinking in general education biology. CBE—Life Sciences Education, (3), 327–337. https://doi.org/10.1187/cbe.07-11-0097
  • Quitadamo I. J., Kurtz M. J. (2007). Learning to improve: Using writing to increase critical thinking performance in general education biology. CBE—Life Sciences Education, (2), 140–154. https://doi.org/10.1187/cbe.06-11-0203
  • Reynolds J. A., Smith R., Moskovitz C., Sayle A. (2009). BioTAP: A systematic approach to teaching scientific writing and evaluating undergraduate theses. BioScience, (10), 896–903. https://doi.org/10.1525/bio.2009.59.10.11
  • Reynolds J. A., Thaiss C., Katkin W., Thompson R. J. (2012). Writing-to-learn in undergraduate science education: A community-based, conceptually driven approach. CBE—Life Sciences Education, (1), 17–25. https://doi.org/10.1187/cbe.11-08-0064
  • Reynolds J. A., Thompson R. J. (2011). Want to improve undergraduate thesis writing? Engage students and their faculty readers in scientific peer review . CBE—Life Sciences Education , ( 2 ), 209–215. https://doi.org/­10.1187/cbe.10-10-0127 . [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Rhemtulla M., Brosseau-Liard P. E., Savalei V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions . Psychological Methods , ( 3 ), 354–373. https://doi.org/­10.1037/a0029315 . [ PubMed ] [ Google Scholar ]
  • Stephenson N. S., Sadler-McKnight N. P. (2016). Developing critical thinking skills using the science writing heuristic in the chemistry laboratory . Chemistry Education Research and Practice , ( 1 ), 72–79. https://doi.org/­10.1039/C5RP00102A . [ Google Scholar ]
  • Tariq V. N., Stefani L. A. J., Butcher A. C., Heylings D. J. A. (1998). Developing a new approach to the assessment of project work . Assessment and Evaluation in Higher Education , ( 3 ), 221–240. https://doi.org/­10.1080/0260293980230301 . [ Google Scholar ]
  • Timmerman B. E. C., Strickland D. C., Johnson R. L., Payne J. R. (2011). Development of a “universal” rubric for assessing undergraduates’ scientific reasoning skills using scientific writing . Assessment and Evaluation in Higher Education , ( 5 ), 509–547. https://doi.org/10.1080/­02602930903540991 . [ Google Scholar ]
  • Topping K. J., Smith E. F., Swanson I., Elliot A. (2000). Formative peer assessment of academic writing between postgraduate students . Assessment and Evaluation in Higher Education , ( 2 ), 149–169. https://doi.org/10.1080/713611428 . [ Google Scholar ]
  • Willison J., O’Regan K. (2007). Commonly known, commonly not known, totally unknown: A framework for students becoming researchers . Higher Education Research and Development , ( 4 ), 393–409. https://doi.org/10.1080/07294360701658609 . [ Google Scholar ]
  • Woodin T., Carter V. C., Fletcher L. (2010). Vision and Change in Biology Undergraduate Education: A Call for Action—Initial responses . CBE—Life Sciences Education , ( 2 ), 71–73. https://doi.org/10.1187/cbe.10-03-0044 . [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Zeineddin A., Abd-El-Khalick F. (2010). Scientific reasoning and epistemological commitments: Coordination of theory and evidence among college science students . Journal of Research in Science Teaching , ( 9 ), 1064–1093. https://doi.org/10.1002/tea.20368 . [ Google Scholar ]
  • Zimmerman C. (2000). The development of scientific reasoning skills . Developmental Review , ( 1 ), 99–149. https://doi.org/10.1006/drev.1999.0497 . [ Google Scholar ]
  • Zimmerman C. (2007). The development of scientific thinking skills in elementary and middle school . Developmental Review , ( 2 ), 172–223. https://doi.org/10.1016/j.dr.2006.12.001 . [ Google Scholar ]

Balancing Emotion and Reason to Develop Critical Thinking About Popularized Neurosciences

  • Open access
  • Published: 07 September 2020
  • Volume 29, pages 1139–1176 (2020)


  • François Lombard   ORCID: orcid.org/0000-0002-8933-0385 1 ,
  • Daniel K. Schneider   ORCID: orcid.org/0000-0002-8088-885X 2 ,
  • Marie Merminod   ORCID: orcid.org/0000-0002-8237-0317 3 &
  • Laura Weiss   ORCID: orcid.org/0000-0002-8367-1891 3  


Bioscientific advances raise numerous new ethical dilemmas. Neuroscience research opens possibilities of tracing and even modifying human brain processes, such as decision-making, revenge, or pain control. Social media and science popularization challenge the boundaries between truth, fiction, and deliberate misinformation, calling for critical thinking (CT). Biology teachers often feel ill-equipped to organize student debates that address sensitive issues, opinions, and emotions in classrooms. Recent brain research confirms that opinions cannot be understood as solely objective and logical and are strongly influenced by the form of empathy engaged. Emotional empathy engages strongly with the salient aspects of a situation but blinds us to others’ reactions, whereas cognitive empathy allows perspective-taking and independent CT. In order to address the complex socioscientific issues (SSIs) that recent neuroscience raises, cognitive empathy is a significant skill rarely developed in schools. We will focus on the processes of opinion building and argue that learners first need a good understanding of methods and techniques to discuss potential uses and other people’s possible emotional reactions. Subsequently, in order to develop cognitive empathy, students are asked to describe opposed emotional reactions as dilemmas by considering alternative viewpoints and values. Using a design-based research paradigm, we propose a new learning design method for independent critical opinion building based on the development of cognitive empathy. We discuss an example design to illustrate the generativity of the method. The collected data suggest that students developed decentering competency and scientific methods literacy. Generalizability of the design principles to enhance other CT designs is discussed.


1 Introduction

Socioscientific issues (SSIs) raised by the rapid progress and potential applications of life sciences and technology in areas such as genetics, medicine, and neuroscience challenge students and future citizens with new moral dilemmas. For example, results from recent neuroscience research have attracted considerable attention in the media, with popularized information often claiming that neuroimaging can be used to decipher various human mental processes and possibly modify them. Insights into brain functioning seem to challenge the classical boundaries of psychology, biology, philosophy, and popularized science that students are confronted with. They raise intense and complex SSIs for which there is no large body of ethical or educational reflection (Illes and Racine 2005 ). There are serious issues and some controversy surrounding the confusion of brain activity with mental processes or states of mind (Lundegård and Hamza 2014 ) and the emotive power of brain scans; for example, Check ( 2005 ) and McCabe and Castel ( 2008 ) show that neuroimages can have much greater convincing power than the methods and the scientific data that produced them warrant. Ali et al. ( 2014 ) call this phenomenon neuroenchantment . Proper interpretation of the neuroimaging data frequently presented in popularized science is a key epistemological and ethical challenge (Illes and Racine 2005 ) that schools do not generally address, leaving future citizens unprepared to face these new issues. Students need to be better equipped with reasonable thinking for deciding what to believe or do: critical thinking (CT).

What citizens know of science is currently shaped mainly by out-of-school sources such as traditional and social media (Fenichel and Schweingruber 2010 ). Developing CT in students is an important educational goal in many curricula, e.g., the CIIP ( 2011 ) in Switzerland. However, the PISA study shows that there is room for improvement (Schleicher 2019 ). While the internet offers access to invaluable information, the propagation of “fake news” has become a worrying issue (Brossard and Scheufele 2013 ; Rider and Peters 2018 ; Vosoughi et al. 2018 ). Additionally, Bavel and Pereira ( 2018 ) argue that our increased access to information has isolated us in ideological bubbles where we mostly encounter information that reflects our own opinions and values. The overwhelming amount of information available on social media paradoxically does not help us understand other opinions; rather, it hinders CT and especially perspective-taking (Jiménez-Aleixandre and Puig 2012 ; Rowe et al. 2015 ; Willingham 2008 ).

Adding to these difficulties regarding CT, neuroscience research has been criticized because of distortions introduced through sensationalist popularization. We adopt a neutral stance towards results published under the label of neuroscience or presented as “brain research”: education must navigate between naïve adhesion to anything carrying these labels and wholesale rejection of neuroscience research because of the sensationalist flaws in its popularization. This study is an attempt to address this challenge and propose a new perspective for helping students develop some difficult aspects of CT that might enhance many classical learning designs. Self-centered or group-centered emotions often hinder CT (Ennis 1987 ; Facione 1990 ). Sadler and Zeidler ( 2005 ) also show that emotive informal reasoning is directed towards real people or fictitious characters. Imagining people’s emotional and moral reactions in these different situations without being overwhelmed by one’s own empathetic emotional reactions is a major difficulty in CT education. While the most basic form of empathy focuses on the emotional aspects of a situation, it blinds us to others’ reactions (Bloom 2017a ) and hinders decentering. The more advanced cognitive form of empathy (Klimecki and Singer 2013 ) enables decentering and reasonable assessment of moral dilemmas. This article proposes an approach for developing CT that draws not only on rational reasoning but also on understanding others’ emotional reactions (cognitive empathy) to develop the perspective that is needed: thinking independently, challenging one’s own personal or collective interest, and overcoming egocentric values (Jiménez-Aleixandre and Puig 2012 ). Consequently, developing this decentering aspect of CT in students is a central aim of this contribution.
In addition, we argue that a proper understanding of methods is also necessary to discuss the potential and limits of research findings, especially in popularized neuroscience. Thus, methodological knowledge is a preliminary and necessary step towards understanding the social and human implications of such scientific results. Therefore, developing scientific methods literacy is a foundational goal of this contribution.

We will develop this new contribution to CT teaching in five steps:

In Section 2 , we will discuss theories that can guide the crafting of learning designs for developing selected CT skills and lead to an original conceptualization focused on decentering when discussing popularized neuroscience. We start by reviewing CT in education and its various definitions and discuss the challenges of its implementation and several approaches. We show through recent literature that attempting to ignore emotions while debating opinions does not reduce their effects on CT. Starting from this, we will discuss the importance of decentering from one’s own values and social belonging in CT and the essential role of empathy in this process. We develop the idea that helping students to discover and understand the scientific methods used in neuroscience research is foundational to imagining its limits and potential as well as others’ moral and emotional reactions. We will argue that focusing the discussion of the SSIs raised on empathetic discussion of these different reactions can enhance decentering skills. We finish by summarizing the design approach.

In Section 3 , we map the theory developed in Section 2 onto educational design principles. We first explain the conjecture mapping technique that we used (exemplified in Section 4 ). We then define learning goals, i.e., the expected effects (EEs), and finish by elaborating design principles in the form of educational design conjectures for decentering CT skills.

In Section 4 , we present, analyze and discuss an example learning design. Learning design as an activity can be defined as design for learning, i.e., “the act of devising new practices, plans of activity, resources and tools aimed at achieving particular educational aims in a given situation” (Mor and Craft 2012 , p. 86). In this study, the learning design is part of the outcome, i.e., a reproducible design. We start by presenting an abstract model based on Sandoval and Bell’s ( 2004 ) conjecture map , a design method developed for design-based research that allows the identification of key elements of a learning design in a way suitable for research and practice. The presented design was developed in 10 iterations over 15 years in higher secondary biology classes (equivalent to high school) in Geneva, Switzerland. We then present the design of the 2018/2019 implementation.

In Section 5 , we present some empirical results based on quali-quantitative data from student-produced artifacts from the 2018/2019 cohort. We also present findings from an end-of-semester survey.

Section 6 summarizes and discusses the main findings, discusses their implications and limitations, and outlines further perspectives.

We formulate two research questions at the end of the theory sections that we summarize as follows: (1) How can a conceptualization that focuses on decentering and methods literacy be implemented through an operational learning design and what are its main design elements? (2) Does an implementation of this learning design help students improve the selected CT skills?

2 Theoretical Framework

2.1 Critical Thinking in Education

In education, calls to develop critical thinking (CT) in students are frequent. This crucial skill, necessary for citizens to participate in a plural and democratic society, is often lacking among students according to PISA results (Schleicher 2019 ). Science education curricula usually include CT as a learning goal. The official curriculum for Swiss-French secondary schools (CIIP 2011 ) states that “In a society deeply modified by scientific and technological progress, it is important that every citizen masters basic skills in order to understand the consequences of choices made by the community, to take part in social debate on such subjects and to grasp the main issues. In the ever-faster evolution of the world, it is necessary to develop in students a conceptual, coherent, logical and structured thinking, with a flexible mind and a capacity to deliver adequate productions and act according to reasoned choices” (our translation) but then focuses on rational thinking: “The purpose of science is to establish a principle of rationality for the confrontation of ideas and theories with the facts observed in the learner’s world” (CIIP 2011 , our translation). Official educational guidelines often focus on the reason-based aspect of CT, but the emotional aspects of CT are also recognized in some official educational programs. For example, the CIIP ( 2011 ) mentions the learning goal “reflexive approach and critical thinking,” which consists in the “ability to develop a reflexive approach and critical stance to put into perspective facts and information, as well as one’s own actions…” The descriptors include “evaluating the shares of reason and affectivity in one’s approach; verifying the accuracy of the facts and putting them into perspective” (our translation).

One of the most widely cited definitions of CT, by Robert Ennis, introduces the concept as “reasonable reflective thinking, that is focused on deciding what to believe or do” (1987, p. 6). Ennis proposes a list of twelve dispositions and sixteen abilities that characterize the ideal critical thinker. This list and its items “can be considered as guidelines or goals for curriculum planning, as ‘necessary conditions’ for the exercise of critical thinking, or as a checklist for empirical research” (Jiménez-Aleixandre and Puig 2012 , p. 1002). Facione ( 1990 ), in a statement of expert consensus, states, “We understand critical thinking to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based. […] The ideal critical thinker is habitually inquisitive, well-informed, trustful of reason, open-minded, flexible, fair-minded in evaluation, honest in facing personal biases, prudent in making judgments, willing to reconsider, […] It combines developing CT skills with nurturing those dispositions which consistently yield useful insights and which are the basis of a rational and democratic society” (p. 3).

In both texts, the focus is on reasonable thinking, and emotions are only referenced implicitly. For example, Facione’s definition mentions “personal biases,” and the only mention of emotion in the main text is negative: “to judge the extent to which one’s thinking is influenced by deficiencies in one’s knowledge, or by stereotypes, prejudices, emotions or any other factors which constrain one’s objectivity or rationality” (Facione 1990 , p. 10). CT seems to shun emotions. As in philosophy and argumentation, emotions are considered out of place in good reasoning (Bowell 2018 ), and no form of empathy is explicitly taken into account, except within “personal biases.”

A set of Ennis’s CT abilities are related to scientific information literacy: the ability to discuss the limits and potential of scientific information based on a good understanding of the methods and foundations of its elaboration. From a science education point of view, Hounsell and McCune ( 2002 ) propose the ability “to access and evaluate bioscience information from a variety of sources and to communicate the principles both orally and in writing [...] in a way that is well organized, topical and recognizes the limits of current hypotheses” (Hounsell and McCune 2002 , p. 7, quoting QAA 2002 ). We draw from this definition that science does not produce truths but tentative, empirically based knowledge that must be understood within the limits of the conceptual framework and hypotheses that determine the methods that produced this knowledge.

It is also important to define what CT does not mean in this context: it does not imply negative thinking or an obsessive search for faults and flaws in scientific results. CT should not be conflated with a systematic criticism of science, which in some cases has become so strong as to create defiance towards science and scientific methods. CT does not mean discussing only bad examples and exaggerated claims or inferences. Angermuller ( 2018 ) warns, “research critically interrogating truth and reality may serve propagandists of post-truth and their ideological agenda” (p. 2). Furthermore, CT should not mean observance of a teacher’s personal critical views. CT must focus on skills that allow students to reasonably evaluate knowledge on the basis of available evidence and requires recognizing but decentering from personal biases and understanding scientific methods well enough to evaluate the potential and limits of research.

One classical approach in classrooms is argumentation and debating beliefs and opinions (Bowell 2018 ; Dawson and Venville 2010 ; Dawson and Carson 2018 ; Duschl and Osborne 2002 ; Jiménez-Aleixandre et al. 2000 ; Jonassen and Kim 2010 ; Legg 2018 ). Additionally, learning progressions organizing skills into different stages have been well discussed (Berland and McNeill 2010 ; Plummer and Krajcik 2010 ). Osborne ( 2010 ) writes that much is understood about how to organize groups for learning and how the norms of social interaction can be supported and taught. For example, Buchs et al. ( 2004 ) show that debate is most efficient as a learning activity when it is very specifically organized to favor epistemic rather than relational elaboration of conflict. This requires ignoring emotions (and implicitly any form of empathy) to focus on rational discussion. Constructive controversy has been demonstrated to be very efficient at identifying the best group answer on a specific question (Johnson and Johnson 2009 ), but focuses—remarkably well—on keeping the debate rational and does encourage decentering through role exchange; however, in our view, it is not specifically focused on handling the emotions and empathetic reactions that some very sensitive issues can raise, as Bowell ( 2018 ) shows.

Teachers who attempt to organize classroom debates or argumentation often encounter great difficulty in doing so (Osborne 2010 ; Simonneaux 2003 ). They often feel ill-trained and worried about handling the emotional reactions and value conflicts that arise during discussions and arguments about SSIs. Ultimately, they frequently refrain from debates (Osborne et al. 2013 ) or confine themselves to the apparently safe boundaries of rationality. How student groups can be supported to produce elaborated, critical discourse is unclear according to Osborne ( 2010 ). An unusual approach was proposed by Cook et al. ( 2017 ). They describe it well in their title: “Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence.” This immunological metaphor of exposing students to possible biases and manipulations in advance as a strategy for developing CT skills contrasts with approaches where students are protected from and cautioned against such information, which is in turn dismissed. We consider here how to face the educational challenge and address the difficult new SSIs raised by scientific advances—notably in neuroscience.

While this article is not about conceptual change, which is the subject of abundant research, including Clark and Linn ( 2013 ); diSessa ( 2002 ); Duit et al. ( 2008 ); Ohlsson ( 2013 ); Posner et al. ( 1982 ); Potvin ( 2013 ); Strike and Posner ( 1982 ); and Vosniadou ( 1994 ), it is worth noting that conceptual change also cannot be fully understood without considering the effects of beliefs—especially on some subjects such as evolution (Clément and Quessada 2013 ; Sinatra et al. 2003 ; Potvin 2013 ). Tracy Bowell ( 2018 ) insists that against deeply held beliefs, rational argument cannot suffice: “Although critical thinking pedagogy does often emphasize the need for a properly critical thinker to be willing (and able) to hold up their own beliefs to critical analysis and scrutiny, and be prepared to modify or relinquish them in the face of appropriate evidence, it has been recognized that the type of critical thinking instruction usually offered at the first-year level in universities frequently does not lead to these outcomes for learners” (p. 172).

Discussing SSIs engages opinions. Roget’s Thesaurus defines opinions as views or judgments formed about something, not necessarily based on fact or knowledge. For Astolfi ( 2008 ), opinion “is not of the same nature as knowledge. The essential question is then no longer to decide between the points of view expressed as to who is right and who is wrong. It is to access the underlying reasons that justify the points of view involved” (p. 153, our translation). Among others, Legg ( 2018 ) discusses how difficult—even for professional thinkers—forming a well-built opinion is. We will not address this thorny philosophical question here but discuss how to develop decentering skills with 18- to 19-year-old high school biology students discovering recent popularized research. The central point in this article is not about deciding which opinion is correct or socially acceptable in the specific social and cultural environment of students or even which opinion the current state of scientific knowledge supports. Jiménez-Aleixandre and Puig ( 2012 ) highlight the importance of thinking not only reasonably but also independently . This text discusses putting into perspective the rational reasons with emotional and empathetic reactions that justify one’s own opinions through understanding that others might have other underlying reasons and emotional and empathetic reactions leading to different opinions, calling for decentering skills.

It would seem natural to discuss opinions. However, discussing students’ opinions in the multicultural classrooms of today could hurt personal, cultural, or religious sensitivities and can be counterproductive (Bowell 2018 ). Research has shown that many forms of debate, e.g., debate-to-win (Fisher et al. 2018 ), can unintentionally modify participants’ opinions (Simonneaux and Simonneaux 2005 ). Abundant social psychology research has shown, for example, that holding one point of view in a debate modifies the arguer’s opinion (Festinger 1957 ; Aronson et al. 2013 ). Cognitive dissonance reduction has long been identified as an obstacle to accepting new ideas (Festinger 1957 ). Indeed, debating well-established opinions with students or even inexperienced scholars can easily lead to the entrenchment of personal opinions (Bavel and Pereira 2018 ; Legg 2018 ). This raises serious ethical questions: some learning designs might influence the opinions of students or might even become manipulative, unconsciously leading students to observance of the teacher’s personal outrage or opinion. Creating fair, respectful, and productive opinion debates in the classroom setting is difficult. The emotional reactions of teachers and students can get out of hand. Biology teachers are sometimes afraid of students’ reactions when discussing socially loaded topics such as the mechanisms of evolution (Clément and Quessada 2013 ), possibly confusing the well-established explanatory power of evolutionary scientific models with beliefs and opinions students might have. In Switzerland, biology curricula require students to be able to use these scientific models to explain observed phenomena and predict, for example, the consequences for a species of variations in the environment but not to adhere to any specific belief.

For many, a focus on rational and independent thinking should restrict the role emotions play in the opinion building process. Jiménez-Aleixandre and Puig ( 2012 ) mention, “Although we think that it is desirable for students (and people) to integrate care and empathy in their reasoning, we would contemplate purely or mainly emotive reasoning as less strong than rational reasoning” (p. 1011). This concern about the threat of emotion-only reasoning could be understood by some readers to imply that rational thinking processes alone should guide independent opinion building to allow decentered thinking and that empathy should not be encouraged. It does not appear realistic to expect this of 19-year-old students, and we will discuss below how ignoring emotions in opinion building processes might in fact increase their influence.

Rider and Peters ( 2018 ) discuss free thinking, and Legg ( 2018 ) stresses how social media could lead users to avoid encountering any viewpoints or arguments that contradict their own, discussing how professional thinkers and writers seek better opinions by confronting others’ opinions. In her final line, she encourages readers to “[listen] well to those with contrary opinions—even those who promote them most aggressively—since, in the epistemic as opposed to the political space, as ever, ‘the [only] solution to poor opinions is more opinions’” (p. 56). She suggests seeking further information before behaving as if one has certainty as a way to overcome the arrogant assumed certainty that is a dismaying feature of our current regime. We fully agree with the need to take into account differing and contrary opinions: a good capacity for decentering is indeed central to CT, but how this can be achieved is a challenge that cannot be tackled without taking into account emotions and dealing with different forms of empathy.

With young students in particular, social belonging and emotions cannot be ignored. Bowell ( 2018 ) shows in an example that “students’ deeply held beliefs […] had been formed in the environments of their families and their communities. […] By recognizing and acknowledging the emotional weight of the students’ deeply held beliefs about climate change and their suspicion toward scientists and the evidence they produce, the teacher found a way to disrupt those beliefs” (p. 183). For 19-year-old students, asking for rational debate while ignoring emotions might be quite problematic for some SSIs. Since CT can be challenged by emotionally overwhelming reactions, without developing skills to decenter students from their own emotional and empathetic responses, many educational designs based on debate might not develop their full potential.

In summary, educational strategies for rational debate have substantial potential to promote science and CT and are often used in schools where CT is pursued; however, it appears, as PISA results show (Schleicher 2019 ), that there is still room for improvement. New learning designs specifically aimed at balancing reason and emotional reactions may contribute to increasing CT skills. Such designs should probably include learning to deal with the different forms of empathy that will be discussed below and could be implemented before setting up debates or possibly even before students develop their own opinions about the new SSIs raised by the abundance of neuroscience research.

2.2 Emotions and Decentering in Critical Thinking

Recent research adds evidence to what psychologists and some philosophers have long argued, namely, that opinion building and moral decisions cannot be understood solely as cold, objective, and logical (Young and Koenigs 2007 ; Decety and Cowell 2014 ; Narvaez and Vaydich 2008 ; Goldstein 2018 ) and that rational-only approaches cannot suffice to guide educational interventions on SSIs (Bowell, 2018 ). According to Sander and Scherer ( 2009 , pp. 189–195), emotion is a process that is fast, focused on a specific event, and triggers an emotional response . It involves 5 components: expression (facial, vocal or postural), motivation (orientation and tendency for action), bodily reaction (physical manifestations that accompany or precede the emotion), feeling (how the emotion is consciously experienced), and cognitive evaluation (interpretations that make sense of emotions and induce them). These interpretations differ across people, moments, individual memories, values, and social belongings, implying complex relationships among emotions, values, and “reason” and indicating how much emotional responses to the same situations can vary according to personal, cultural, and social characteristics. Emotions affect attention to and the salience of specific aspects of a situation (Sander and Scherer 2009 ) and can lead to focusing only on some aspects of the triggering situation and ignoring others. For example, negative emotions narrow the attentional focus and one’s ability to take others’ emotions, such as pain, into account (Qiao-Tasserit et al. 2018 ). Positive emotions (Fredrickson 2004 ; Rowe et al. 2007 ) can broaden people’s attention and thinking, but negative emotions tend to reduce judgment errors and result in more effective interpersonal strategies (Forgas 2013 ; Gruber et al. 2011 ).

The role played by emotions in opinion building has often been considered detrimental (Facione 1990 ; Ennis 1987 ). However, Tracy Bowell ( 2018 ) argues for “ways in which emotion and reason work together to form, scrutinise and revise deeply held beliefs” (p. 170). Sadler and Zeidler ( 2005 ) insist on “the pervasive influence emotions have on how students frame and respond to ethical issues” (p. 115), and there appears to be agreement that opinion building cannot be understood as only objective and logical. In a sense adding empirical evidence to Sadler and Zeidler ( 2005 ), Young and Koenigs ( 2007 ) use fMRI data to show that emotions not only are engaged during moral cognition but are in fact critical for human morality and opinion building. Confirming in-group biases identified by social psychologists, neuroscience research suggests that thinking about the mind of another person is done with reference to one’s own mental characteristics (Jenkins et al. 2008 ) and can therefore interfere with and thwart decentering attempts. Vollberg and Cikara ( 2018 ) showed that in-group bias can unknowingly influence emotions and opinions in favor of the priorities and interests of the group. We see this new evidence as convergent with the discussion by Sadler and Zeidler ( 2005 ) of the interactions between informal (rationalistic, emotive, and intuitive) reasoning patterns that occur when students think about SSIs.

We have seen that both Ennis ( 1987 ) and Facione ( 1990 ) support the importance of decentering from one’s own point of view, emotions, and values in order to be able to take into account other, potentially conflicting perspectives. De Vecchi ( 2006 ) also differentiates levels of CT, with the highest level being “Debating one’s own work as well as that of others in a cooperative manner. Positively discussing objections from others and taking them into account” (p. 180, our translation). Jiménez-Aleixandre and Puig ( 2012 ) emphasize thinking independently, challenging one’s own personal or collective interests and overcoming egocentric values. Piaget ( 1950 ) used the term décentration (often translated as decentering ) to describe the progressive ability of a child to move from his or her “necessarily deforming and egocentric viewpoint” to a more objective elaboration of “the real connections” between things (p. 107–108, our translation). This move implies disengaging the object from one’s immediate action to locate it in a system of relations between things corresponding to a system of operations that the subject could apply to them from all possible viewpoints. The capacity for “putting oneself in another’s shoes” and envisioning the complex potential intentions and mental states of others, also referred to as the theory of mind or cognitive empathy, begins developing in young children around the age of 2 and appears to be unique to humans and a few other animals (Call and Tomasello 2008 ; Seyfarth and Cheney 2013 ).

This particularly highlights the relevance of decentering to independent opinion building processes in our multicultural, connected world, where sensationalism, speed, and immediacy challenge one’s capacity to put into perspective one’s own opinion or emotional reactions. The SSIs raised by neuroscience research include sensitive issues such as claims in popularized media about deciphering various human mental processes (e.g., the placebo effect (Wager et al. 2004 ), face identification from neuron activity measurements (Chang and Tsao 2017 ), and vengeance control (Klimecki et al. 2018 )) and possibly modifying them (e.g., activating brain areas to control pain (deCharms et al. 2005 )), claims that could elicit strongly differing moral views across the diversity of social and religious belongings or personal values and monistic or dualistic views about the mind. Helping students to think independently from their moral perspective about such issues calls for teaching designs specially geared towards developing decentering skills, not just requiring them.

The process of forming an independent opinion about a given SSI should therefore include two dimensions: (1) awareness that one’s point of view and emotional reaction towards a situation are not necessarily the only ones; (2) the capacity to understand and take into account other possible emotional reactions than one’s own without necessarily adhering to them.

Jiménez-Aleixandre and Puig ( 2012 ), as they highlight the importance of thinking not only reasonably but also independently , point out that CT should include the challenge of argument from authority (traditional authority of position (Peters 2015 )) and the capacity to criticize discourses that contribute to the reproduction of asymmetrical relations of power. They distinguish four main components of CT:

The ability “to evaluate knowledge on the basis of available evidence [...]”

The display of critical “dispositions, such as seeking reasons for one’s own or others’ claims [...]”

The “capacity of a person to develop independent opinions [...] as opposed to relying on the views of others (e.g., family, peers, teachers, media)”

“the capacity to analyze and criticize discourse that justifies inequalities and asymmetrical relations of power.” (p. 1002)

For these authors, while the first two components belong to argumentation, the other two have to do with social emancipation and citizenship. This socially decentered dimension of CT highlights the importance of the skills this project focuses on: “the competence to develop both independent opinions and the ability to reflect about the world around oneself and participate in it. It is related to the evaluation of scientific evidence [...], to the analysis of the reliability of experts, to identifying prejudices [...] and to distinguishing reports from advertising or propaganda. Thinking critically [...] could involve challenging one’s own personal or collective interest and overcoming egocentric values” (p. 1012).

We will refer to decentering as the ability to put one’s first emotional reactions in perspective and take into account different, contradictory values and emotional reactions other people (with different values, social contexts, and beliefs) might have in a given situation—real or imagined.

2.3 Empathy as a Skill for Decentering in Critical Thinking?

Singer and Klimecki ( 2014 ) write that perspective-taking ability is the foundation for understanding that people may have views that differ from our own and that moral decisions strongly imply empathic response systems. Empathy is “a psychological construct regulated by both cognitive and affective components, producing emotional understanding” (Shamay-Tsoory et al. 2009 , p. 617). Empathy is often considered a positive, benevolent emotional reaction, but some forms of empathy can hinder decentering. Bloom ( 2017a , b ) highlights the ambiguous role of emotional empathy in moral reasoning: he argues that empathy is fraught with biases, including biases towards attractive people and for those who look like us or share our ethnic or national background. Additionally, it connects us to particular individuals, real or imagined, but is insensitive to others, however numerous they may be (Bloom 2017a ). He compares empathy to a searchlight: it focuses on one aspect of the situation and the emotions it causes but leaves in darkness the other emotional reactions that people with different values or in different situations might have; therefore, some forms of empathy do not facilitate perspective-taking. Klimecki and Singer ( 2013 ) distinguish two empathetic response systems. The first response type, emotional empathy, focuses the attention of subjects through the emotions the situation evokes but blinds them to other people’s reactions and leads to self-oriented behavior. A second type of response, cognitive empathy (which we consider to be similar to Sadler and Zeidler’s emotive reasoning), helps one understand the emotional reactions and perspectives of those with different values or from different cultures and is a critical decentering skill. For Shamay-Tsoory et al. ( 2009 ), emotional empathy is developed early in infants and acts as a simulation system ( I feel what you feel ) involving mainly emotion recognition and emotional contagion.
Cognitive empathy develops later and relies on “more complex cognitive functions,” such as the “mentalizing” or “perspective-taking” system: the ability to understand another person’s perspective and to feel concerned for what the other feels without necessarily sharing the same feelings. The first form of empathy is problematic (Bloom 2017a ), because sharing the negative emotions of others can paradoxically lead to withdrawal from the negative experience and self-oriented behavior. Cognitive empathy allows for a more distant and balanced appraisal of a situation: it results in positive feelings of care and concern and promotes prosocial motivation. It also helps one understand the emotional reactions of others who have different values and social belongings, which is necessary for decentering in CT.

We have seen that opinion building cannot be considered a cold and rational process and that many biases prevent individuals from understanding others’ emotional reactions, which hinders independent thinking in CT. Some forms of empathy, variously called perspective-taking, theory of mind, or sympathy, might mitigate this problem; therefore, we will discuss their implications for thinking about SSIs. Sadler and Zeidler ( 2005 ) show that empathy “has allowed the students to identify with the characters in the SSI scenarios and allow for multiple perspective-taking” (p. 115). Furthermore, they explain how emotive reactions can help students imagine others’ reactions and present informal reasoning as involving empathy, a moral emotion characterized by “a sense of care toward the individuals who might be affected by the decisions made” (p. 121). This informal emotive reasoning is rational and rooted in emotion and differs from rationalistic reasoning. The authors insist that emotive patterns can be directed towards real people or fictitious characters. We assume that empathy (emotive reactions) directed at real or imagined people could be used in education to help students develop a decentered perspective. Complex decisions involving contradictory moral principles strongly imply empathy (Sadler and Zeidler 2005 ). While Sadler and Zeidler ( 2005 ) mention the importance of emotive informal thinking, this skill is not generally addressed when designing education about SSIs.

Shamay-Tsoory et al. ( 2009 ) suggest that emotional and cognitive empathy rely on “distinct neuronal substrates.” Singer and Klimecki ( 2014 ) also show that the plasticity of these systems allows cognitive empathy to be trained to some degree in a few sessions. Overall, these neuroscientific results suggest that cognitive and emotional systems are complex and concurrent and might well be separate within the brain. While measures of activity, from which empathy is inferred in ways the scientific community recognizes, cannot be considered proof from a philosophical point of view, they nonetheless constitute scientific evidence worth considering for learning design. This could imply that cognitive empathy can be activated and trained without necessarily activating emotional empathy. Educational designs that develop cognitive empathy and decentering might help students to “think independently, challenging [their] own personal or collective interest and overcoming egocentric values” while reducing the pitfalls of “emotions […] which constrain one’s objectivity or rationality” (Facione 1990 , p. 12). This is the challenge this research focuses on. Cognitive empathy, so crucial for decentering, is not generally developed in schools. Debate-based learning designs that do not distinguish between emotional and cognitive empathy might not realize their full potential because of previous emotionally biased opinions. This could explain some of the difficulties felt by many about purely or mainly emotive reasoning and the limits of intuitive reasoning (Jiménez-Aleixandre and Puig 2012 ). The conceptualization we develop here suggests pursuing a new approach for developing decentering competency: developing cognitive empathy for the emotional reactions of others while refraining from emotional empathy in the process of building independent opinions.

2.4 Understanding Science Methods to Develop CT

Methods are at the core of research paradigms (Kuhn 1962 ) and determine a good part of the potential and limits of scientific research (Lilensten 2018 ). Therefore, some understanding of research techniques and methods is required to assess the scope (including the limits, implications, and potential uses) of research results (Hoskins et al. 2007 ). Facione ( 1990 ) also insists on the necessity of a proper domain-specific understanding of methods. One implication the experts draw from their analysis of CT skills is this: “While CT skills themselves transcend specific subjects or disciplines, exercising them successfully in certain contexts demands domain-specific knowledge, some of which may concern specific methods and techniques used to make reasonable judgments in those specific contexts” (p. 7).

Methods and their limits are often ignored by teachers (e.g., Waight and Abd-El-Khalick 2011 ; Kampourakis et al. 2014 ). Didactic transposition (DT) theory (Chevallard 1991 ) investigates how knowledge that teachers are required to teach is transformed during the process of selection into curricula and adaptation to teacher values and classroom requirements. The methods that produce research results are generally not thoroughly discussed with students. The large body of research on DT shows that to be easily teachable, exercisable, and assessable, classroom knowledge generally becomes definitive and is often reduced to assertive conclusions (Lombard & Weiss 2018 ).

Understanding the limits of neuroscience research results, especially neuroimaging results, is a particular challenge. A proper understanding of the methods used is needed to understand the limits of such research and develop a critical perspective to overcome neuroenchantment (Ali et al. 2014 ). There is a risk that brain activities might be understood as objects and essential concepts and that inferences of the engagement of a specific cognitive process from brain activation observed during a task might be overinterpreted (Nenciovici et al. 2019 ). While research articles are required to discuss the limits of their claims, proper interpretation of the neuroimaging data commonly found in popularized science is a critical challenge (Illes and Racine 2005 ), and students are not often presented with primary literature. Rather, they encounter transposed versions where claims and simplified interpretations are typically presented as definitive without discussion of the limits that the methods imply. Indeed, there are many issues with the emotive power of brain scans; for example, Check ( 2005 ) and McCabe and Castel ( 2008 ) show that neuroimages can have much more convincing power than the methods and the scientific data they produce warrant, leaving future citizens unprepared to face new issues as they arise. We will refer to this solid understanding of the methods required to assess the limits and potential uses of research as scientific methods literacy .

Since methods are generally absent or insufficiently represented in the popularized science that students are confronted with (Hoskins et al. 2007 ), this has an important implication: in order to discuss SSIs, it is necessary to refer to the original article to obtain a proper understanding of the potential uses and limits of the research. Having secondary or high school students use primary literature with some help has been shown to be possible and, in fact, beneficial for a good understanding of science (Yarden et al. 2009 ; Falk et al. 2008 ; Hoskins et al. 2007 ; Lombard 2011 ).

From this literature, we draw the need for what we call scientific methods literacy, in this context defined as the ability to understand scientific techniques and methods sufficiently to imagine potential uses and limits. This will generally imply some access to primary literature.

2.5 Educational Design for Decentering CT Skills

Let us recall that we aim to propose and discuss a new learning design to develop a selection of students’ skills for CT about SSIs in neuroscience. More precisely, we aim to foster independent opinion building. The aims of this article are (1) to translate the new conceptualization emerging from the theoretical framework into an instructional design that develops the selected CT skills in higher secondary biology classes, (2) to describe this design, and (3) to analyze and discuss the results produced by this design in its final iterative refinement. Our literature review identified two crucial skills that learners should develop to improve their CT: (i) decentering skills: the ability to decenter from one’s first emotional reactions and take into account different, contradictory values and emotional reactions; (ii) certain scientific methods literacy skills: specifically defined here as the ability to understand scientific techniques and methods sufficiently to imagine potential uses and limits. Not discussed in this article but also relevant are other scientific information literacy skills, i.e., the ability to select and understand scientific articles and to produce text according to typical scientific practice. Below, we shall briefly outline the overall design approach, the learning goals, and the main guiding principles that can be used to generate specific learning designs such as the one presented in Section 4 .

Learning is a process that can be guided and encouraged but not imposed. “One of the ways that teaching can take place is through shaping the landscape across which students walk. It involves the setting in place of epistemic, material and social structures that guide, but do not determine, what students do” (Goodyear 2015 , p. 34). In that view, the materials and resources presented do not automatically map to learning gains; rather, the cognitive activities learners effectively practice determine the learning. Accordingly, the epistemic, material, and social structures (practical activities and productions) must be designed to encourage these cognitive activities. Goodyear ( 2015 , p. 33) explains that “The essence of this view of teaching portrays design as having an indirect effect on student learning activity, working through the specification of worthwhile tasks (epistemic structures), the recommendation of appropriate tools, artefacts and other physical resources (structures of place), and recommendation of divisions of labor, etc. (social structures).”

Thinking of teachers as designers offers methods for dealing with complex issues, reframing problems, and working with students “to test and expand the understanding of the problem. Reframing the problem, for example by seeing the problem as a symptom of some larger problem, is a classic design move” (Goodyear 2015 , p. 35). Successive iterations of the design in this project led to the new conceptualization of CT about popularized neuroscience presented here. “Typically, design-based research imports researchers’ ideas into a specific educational setting and researchers then work in partnership with teachers (the local inhabitants) to develop, test and refine successive iterations of an intervention” (Goodyear 2015 , p. 41). Design is not a one-way process by which theory is applied to practice; Schön ( 1983 ) has shown that in the development of expertise, theory is informed by practice as much as practice is informed by theory, in a continuous process. This study is design-based research (DBR), a research paradigm that was developed as a way to carry out formative research for testing and refining educational designs based on theoretical principles derived from prior research (Barab 2006 ; Brown 1992 ; Collins et al. 2004 ; Sandoval and Bell 2004 ). In DBR, iterations of the design produce conclusions—including an enrichment of the theoretical framework and derived design rules—that lead to the optimization of the design and are fed into the next iteration. “Design-based research progresses through cycles of theoretical analysis, conjectures, design, implementation, analysis and evaluation which feed into adjusting the theory and deriving practical artefacts” (Mor and Mogilevsky 2013 , p. 3). Analyzing the data from each design cycle led to reframing the problem and clarifying and focusing the education goals, which raised new research questions that in turn led to obtaining data more relevant to these renewed questions in the next iteration.

According to Collins et al. ( 2004 ), DBR is focused on the design and assessment of critical design elements. It is particularly well suited for exploratory research on learning environments with many variables that cannot be controlled individually—which rules out experimental or pseudoexperimental paradigms. Instead, design researchers try to optimize as much of the design as possible and to observe carefully how the different design elements are operating. As a qualitative approach, DBR is well suited to the creation of new theories (Miles et al. 2014 ). This choice is also ethically justified, since this is not a short experimental intervention but a semester-long course in which tightly controlled conditions might not offer the best learning conditions: in DBR, the design is iteratively adapted and offers students the benefit of the best available design the research can provide at any time (Brown 1992 ). Better, more relevant data from each iteration were used to extract design principles and optimize the design offered to students the following year. DBR is similar to action research (Greenwood and Levin 1998 ) in its tight interweaving of student, teacher, and researcher involvement and its feeding of information back to the community. In DBR, however, the design itself is the object of research and provides valuable insight into learning processes. Compared with other research paradigms, DBR is less about comparison with other published designs than about producing better questions, developing workable designs, and proposing design rules.

From this multiyear DBR approach emerged (i) the new conceptualization on which this article is based, (ii) the identification of educational goals focused on decentering skills and scientific methods literacy, (iii) the design principles presented in Section 3 , and (iv) the methods for obtaining and discussing data relevant to these goals presented in Section 4 .

3 From Theory to Design Conjectures

The method we used to guide the design of this educational module is strongly inspired by the conjecture maps of Sandoval and Bell ( 2004 ). We have explained this method elsewhere, including how we used it to help teachers in training to create, implement, and reflect on their educational designs (Lombard, Schneider & Weiss 2018 ). Central in this approach is the role of embodied conjectures . These are “design conjectures about how to support learning in a specific context, that are themselves based on theoretical conjectures of how learning occurs in particular domains” (Sandoval and Bell 2004 , p. 215). In our model, conjectures (CJs) are implemented as design elements (DEs), which are specific items (generally activities that can be enacted) introduced into the design to produce educational effects, called expected effects (EEs), such as understanding and perspective-taking. These outcomes, being abilities or competencies (EEs here), are not directly measurable (Miles et al. 2014 ), and we therefore look for performed, observable activities that reflect them. EEs are thus assessed through observable effects (OEs), such as student productions, observations, or other traces in which relevant indicators can be measured. The codebook used for the research is available in Appendix Table 1 . In the proof-of-concept design, a simplified version was used by the teacher for assessment; the OEs used to measure the EEs are described in Section 4.2 . The DEs describe and assess the effects of the critical design elements specifically introduced to implement the CJs. They imply that a basic workable learning design is available, e.g., analyzing articles in the category information processing models described by Joyce et al. ( 2000 ), and that teachers have the skills to implement this classical design.
To summarize, conjecture maps explicitly state how conjectures (CJs), i.e., contextualized theoretical constructs, will be implemented with design elements (DEs), what the expected educational effects (EEs) are, and how these can be measured with observable effects (OEs) by teachers and researchers. Researchers and teachers use the same data but analyze them differently for different purposes. Teachers use OEs to measure student progression for formative assessment (Brookhart et al. 2008 ), for diagnostic assessment (Mottier Lopez 2015 ), to inform student guidance, or for student certification. Researchers in this project used these data to assess the efficiency of the design, i.e., to discuss the relevance of the OEs as measures of the EEs and the efficiency of the DEs in producing the EEs and to possibly question the CJs.
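For readers who think in terms of data structures, the CJ, DE, EE, and OE chain described above can be sketched as a simple linked model. This sketch is purely our illustration and does not appear in the study; all class, field, and variable names (e.g., ObservableEffect, observables_for) are hypothetical choices:

```python
# Hypothetical sketch (not from the article): a minimal data model of a
# conjecture map, linking conjectures (CJs) to design elements (DEs),
# expected effects (EEs), and observable effects (OEs).
from dataclasses import dataclass, field

@dataclass
class ObservableEffect:
    code: str           # e.g. "OE2"
    description: str    # a measurable trace, e.g. a student production

@dataclass
class ExpectedEffect:
    code: str           # e.g. "EE2"
    description: str
    observables: list = field(default_factory=list)  # OEs that measure this EE

@dataclass
class DesignElement:
    code: str           # e.g. "DE2"
    activity: str
    produces: list = field(default_factory=list)     # EEs the DE should produce

@dataclass
class Conjecture:
    code: str           # e.g. "CJ2"
    statement: str
    implemented_by: list = field(default_factory=list)  # DEs implementing the CJ

# Illustrative chain loosely based on CJ2/EE2 (understanding methods):
oe = ObservableEffect("OE2", "written analysis naming a method's limits")
ee = ExpectedEffect("EE2", "understand techniques and methods", [oe])
de = DesignElement("DE2", "guided reading of a methods section", [ee])
cj = Conjecture("CJ2", "understanding methods is needed to imagine uses and limits", [de])

def observables_for(conjecture):
    """Collect the OEs reachable from a conjecture via its DEs and EEs."""
    return [o.code for d in conjecture.implemented_by
                   for e in d.produces
                   for o in e.observables]
```

In this reading, assessment flows backwards along the chain: teachers and researchers measure OEs and use them as evidence for the EEs, the DEs that were meant to produce them, and ultimately the CJs.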

Educational strategies aiming to develop perspective-taking should be specifically designed to help students imagine and understand emotional and moral reactions to new research that are different from their own. Based on our theoretical discussion, the precise learning goals we aim to develop are scientific methods literacy and decentering competency. To compose the conjecture map (Sandoval and Bell 2004 ), we decompose these into four operationalized key skills, the expected effects (EEs):

Scientific information literacy : the ability to find, select, and use scientific text .

EE1 : identify the typical, structural elements of a scientific article (the ones often missing in a popularized article), such as the methods and references section and communicate these elements, accurately and concisely, orally, and in writing.

EE1 is part of the design but is not analyzed in this article.

Scientific methods literacy : The ability to understand how the research was carried out.

EE2 : understand the techniques and methods presented in the scientific articles in order to assess the limits of scientific claims and identify several plausible uses of the techniques and methods introduced in the article.

Decentering competency : The ability to take some distance from one’s own emotional reactions to moral issues and to imagine and/or take into account other possible moral principles.

EE3 : imagine different moral reactions to the possible uses of the techniques and methods presented in the article under study.

EE4 : realize that one’s own reactions are not unique and consider other moral principles to assess each potential use without expressing one’s opinion.

The main point here is helping students realize that their own opinions are influenced by an ensemble of personal values and social belongings that are not absolute and can be put into perspective in order to develop decentering skills for CT. Values can be loosely defined here as what grounds a person’s judgments about what is good or bad and desirable or undesirable.

To inform the design of a learning environment to develop these educational goals, we summarize the theory discussed into a set of CJs. In other words, the educational design process is to be guided by several design hypotheses that we call CJs (Sandoval and Bell 2004 ). Each is explained below:

CJ1: Reading and analyzing scientific articles helps students improve the structure and content of their own scientific texts. Learners have to search the primary literature for specific knowledge, such as methods, and are guided to recognize and become familiar with the structure of scientific articles (Falk et al. 2008 ; Hoskins et al. 2007 ) and to elaborate their analysis in an imposed structure. Practiced repeatedly with constructive feedback, this is expected to improve their scientific literacy (Hand and Prain 2001 ).

CJ2: Sufficient understanding of the techniques and methods is needed to imagine the potential uses and limits of the student-studied research. We have seen that methods are often ignored in science teaching. Let us consider a recent paper presenting a method for producing images of the faces seen by a subject based on measurements of the activity of 200 neurons (in macaques) during facial visualization (Chang and Tsao 2017 ). Potentially, images of what a macaque—and probably a person—is seeing, remembering, and imagining could be produced on a computer screen with this neuroscience technique. Potential uses of this technology that raise important SSIs could include eventually being able to identify a criminal suspect’s face by recreating an accurate image of the face through neuronal analysis of the victim’s brain (a sort of direct, brain-to-paper police sketch). A good understanding of the research methods used and their limits is needed to assess the plausibility of this potential use.

CJ3: An array of potential uses of the scientific techniques studied can set the stage for cognitive empathy. Let us recall that emotional-only empathy and biases might narrow the attentional focus and prevent students from taking into account other possible emotional reactions by people with different values, from different social groups, etc. Additionally, debating opinions can unwittingly modify students’ opinions and could trigger personal, cultural, or religious sensitivities in the multicultural classrooms of today. This leads us to restrain students from stating their opinion. To encourage decentering and cognitive empathy, the theoretical discussion presented leads us to propose discussing potential new situations in which students can imagine what different people—with different values, from different cultures, etc.—could potentially use this new technique to do. In an abstract discussion of SSIs, it might be difficult to evoke others’ emotional reactions, since cognitive empathy is a process that requires imagining people’s reactions. It follows that SSIs should be contextualized in situations that the students can relate to and in which they can imagine others and their reactions.

CJ4: Framing SSIs as evoking different emotional reactions and expressing them in terms of conflicting values without mentioning one’s own opinion can develop decentering skills. Students should be encouraged to imagine possible uses, even some that might seem unacceptable to them, in order to explore possible reactions from people with different values and from different cultures and to use cognitive empathy in order to learn how to decenter when encountering a thorny and difficult SSI. Learners are encouraged to restrain their emotional empathy but to foster cognitive empathy, which is central to decentering. As an example, neuroimagery can be used to measure the experience of pain (Wager et al. 2004). The technique (the specific use of fMRI described in the methods) has many potential uses: to compare the effectiveness of and improve pain treatments, to detect fraudulent or simulated illness for insurance purposes, even to compare the pain induced by different torture methods. These situations can help students imagine the emotional reactions of other people. Refraining from expressing personal opinions could ultimately help students put those opinions into perspective and discover the moral reasons that might cause rejection or adoption of a particular use. These can be expressed as dilemmas.

From the operational formulation of scientific literacy and decentering competency learning goals as four key skills, expressed here as EEs, and the theoretical design constructs, expressed as CJs (CJ1–4), we formulate the following research subquestions:

RQ1: How can this conceptualization (the CJs and EEs) be implemented into an operational learning design, and what would be the main DEs? More precisely,

How can activities that develop scientific methods literacy skills (learning goal EE2) be designed?

How can activities that develop decentering abilities (learning goals EE3 and EE4) be designed?

RQ2: Does the learning design help students improve the selected CT skills? This RQ2 is also divided into two subquestions:

What evidence can be found that the design improves scientific methods literacy skills in students?

What evidence can be found that the design improves decentering abilities in students?

4 From Design to a Proof-of-Principle Implementation

Our global research approach—DBR—has already been described in Section 2.5. Here, we describe the context and the method used to collect and analyze qualitative student data from a proof-of-principle semester course. The module was designed and implemented by one of the authors Footnote 1 in a higher secondary biology class in Geneva, Switzerland. It was first implemented in the autumn of the 2002–2003 school year and was conducted over a period of 15 years with ten different cohorts of students, being refined after each implementation through 10 iterations until 2018–2019. In this contribution, we present and discuss the latest version of the design.

Over the course of this study, deep societal transformations, including the emergence of social media and the political turmoil caused by fake news and “alternative facts,” resulted in a shift in the goals of the design and implementation. Additionally, theoretical input from research on science epistemology and CT led to a clearer conceptualization and a sharper focus of the design, which is intrinsic to the DBR paradigm. Over a decade and a half, this project moved from an initial focus on discovering recent bioscience research relevant to future citizens to a second focus: discussing the nature of science. This led us to consider scientific methods literacy, which is needed to properly understand and put into perspective research findings. Furthermore, an explicit focus on developing and strengthening CT skills emerged—at a time when awareness of CT was growing. The classes also focused more specifically on neuroscience research, as it was gaining media coverage. Students’ difficulty in formulating independent opinions about complex and new SSIs that elicited emotional reactions became more apparent. This eventually led us to explore various designs that encourage learners to put their own opinions into perspective when discussing SSIs and that develop decentering skills. The theoretical input from empathy research (Singer and Klimecki 2014) led to a focus on cognitive empathy. Taking into account Shamay-Tsoory et al. (2009) led to the exploration of design elements specifically geared towards practicing cognitive empathy to take emotions into account without reinforcing emotional biases and emotional empathy. Attempts to manage this while avoiding the pitfalls of opinion debate led to the focus on identifying dilemmas in the learning design principles and in the proof-of-principle design (2018/2019 implementation) presented here.

4.1 Population, Data Collection, and Analysis

The data sources are student-produced artifacts—written papers from 2 to 3 home assignments and a written exam—and responses from an individual online anonymous survey administered at the end of the semester to assess students’ perceptions of their CT skills, specifically, decentering and scientific methods literacy.

In the Geneva higher secondary curriculum, students choose at the age of 16 one optional class (OC) composed of 4 semester-long modules (2 periods weekly). Students cannot choose an OC within their major, so the students in this study had a strong background neither in biology nor in science generally. This study took place in the third module (ages 18–19). Classes included 13 to 24 students. Other modules taught by other teachers covered humans’ influence on the environment and climate change, neurobiology, and microbiology. Data on student progression were collected from the cohort (13 students) of the autumn 2018–2019 semester. Four papers were analyzed: two to three written assignments handed in during the semester (3–8 pages, graded) and the final exam, each analyzing a different recent article about neuroscience. One student did not hand in all the assignments, so her data were omitted, leaving a cohort of 12 students whose data are presented in Fig. 3. All 13 students completed the survey.

The third assignment was not mandatory for students who obtained full marks on assignments 1 and 2, so only 7 students handed in the third assignment. We analyzed the results of assignments 1 and 2 and the final exam. All 13 students gave permission for their anonymized papers to be analyzed for research purposes.

Data analysis was performed using mixed qualitative and quantitative methods (Miles et al. 2014).

To answer the second research subquestion, we present and compare the students’ first paper (completed at the very beginning of the semester) with their second paper. We then compare, by the same method, paper 2 with paper 3, when available, or with the final exam. The EEs were observed, coded on a 3-point scale, and analyzed using five indicators of decentering and perspective-taking skills: the identification of scientific methods and techniques (EE2), the quantity of moral dilemmas presented, the diversity of values presented, the quality of moral dilemmas presented (EE3), and the student’s decentered communication (EE4). The codebook is available in Appendix Table 1. Double coding of the first and last papers was applied until a 78% intercoder agreement was reached; simple coding was then applied for the other papers. Effect sizes (Cohen’s d) were computed between the first and last papers.
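The intercoder agreement reported above is a simple percentage of matching codes. As a minimal sketch (the scores below are hypothetical illustrations, not the study’s data), agreement between two coders rating the same artifacts on the 3-point scale could be computed as:

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of items to which two coders assign the same score."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical 3-point-scale codes for nine scored items
coder_1 = [1, 2, 3, 2, 3, 1, 2, 2, 3]
coder_2 = [1, 2, 3, 2, 2, 1, 2, 3, 3]
print(round(percent_agreement(coder_1, coder_2), 1))  # → 77.8
```

Chance-corrected statistics such as Cohen’s kappa are common, more conservative alternatives; the paper reports plain percent agreement.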

The end-of-semester survey included open questions about students’ perception of their progression (comparing their first and last assignments); their approach towards scientific articles and popularized science; what they learned about the relations between science and society, about opinion building, and about refraining from giving their opinion; what they learned while building moral dilemmas; what they learned about using cognitive empathy to approach SSIs and about distinguishing emotional from cognitive empathy; the design itself, its structure, its resources, and what they considered effective; and whether the learning was worth the effort. Many of the questions were used to improve the design over the years (DBR); however, a selection of responses relevant to this research will be presented and discussed. Footnote 2

We shall now present and discuss the proof-of-principle learning design that was then implemented in a class.

4.2 The Proof-of-Principle Learning Design

The first research question, RQ1, is a design question. It asks how a learning design that favors the development of scientific literacy and decentering competency can be implemented. The criteria for success are whether a reusable design can be defined, implemented, and evaluated. Below, we will present the key DEs implementing our theoretical CJs that could be used to attain the learning goals (EEs). The second research question (see Section 5 ) regards evaluating the effects in an implementation.

Using the CJ mapping design method described in Section 3, we will now present the sample learning design as a detailed conjecture map connecting the theory to DEs, learning goals, and effects (Fig. 1). Each CJ is connected to one or more DEs that in turn lead to EEs. EEs (learning outcomes) can be shared and observed through OEs, e.g., student-produced artifacts such as texts or papers produced during assignments. The latter two can be used by teachers to support the teaching process and by researchers to evaluate the design.

Fig. 1 Implementing the goals in a learning design. From CJs to DEs, EEs, and OEs: CJ map of the proof-of-principle design

CJ1 on scientific literacy was implemented as DE1.

DE1: Students write an individual paper according to a specific structure: an introduction; the techniques and methods used in the student-studied research; a list of their potential uses; and a table listing, for each use, the reasons why they themselves or others might favor or oppose it, in the form of opposing values (moral dilemmas). This DE is necessary to achieve EE1 (students identify the typical structural elements of a scientific article and communicate these elements). Three OEs (OE1, OE2, OE3) can be used to assess students’ scientific methods literacy. In this study, OE2 and OE3 were scored between 1 (lowest) and 3 (highest) using the codebook in Appendix Table 1. OE1 (text structure) was not evaluated.

CJ2 ( Sufficient understanding of the techniques and methods is needed to imagine the potential uses and limits of the student-studied research ) is implemented with DE2 and DE3 . First, students must learn about the method and then imagine possible uses of the research as well as different people’s emotional and moral reactions:

DE2 : Students read a popularized article, try to identify the methods, write a section in an individual paper, and refer to the original article if the information in the popularized article is not sufficient. The EEs are EE1, as above, and EE2 ( Students understand the techniques and methods presented in the scientific articles in order to imagine the potential uses and limits of scientific claims ). Students must grasp the essence of the methods to produce an explanation of the methods that can be used to imagine possible uses. Learners realize that scientific claims are limited by methods and that popularized articles generally do not clearly explain the methods or discuss their limits. OE1 (text structure and elements) and OE2 (summary of methods) are used as observables.

DE3 : Find or imagine a list of potential uses of the new methods and techniques—even some that might be offensive to oneself or to other people—and write a section in an individual paper. DE3 supports EE2 and EE3 ( Students imagine different moral reactions towards the possible uses of the techniques and methods presented in the article under study ). OE4 ( table of dilemmas ) includes several potential uses realistically linked to the methods and was scored between 1 (lowest) and 3 (highest) using the codebook in Appendix Table 1 .

Decentering competency is the perspective-taking ability to take some distance from one’s own emotional reactions to moral issues and to imagine and/or take into account other possible moral positions. It relies on two CJs: CJ3 and CJ4. CJ3 ( an array of potential uses of the scientific techniques studied can set the stage for cognitive empathy ) is also implemented as DE3 ( imagine uses of techniques and methods ) and leads to the following expected and observable effects: EE3 (same as above), OE4 ( table of dilemmas includes a diversity of moral values ), and OE5 ( moral dilemmas involve truly opposing contradictory values ). The OEs are scored from 1 (lowest) to 3 (highest) using the codebook in Appendix Table 1. CJ4 focuses on decentering ( framing SSIs as evoking different emotional reactions and expressing them in terms of conflicting values without mentioning one’s own opinion can develop decentering skills ).

DE4 : Students must create a table with at least two opposing values or moral principles on each line, e.g., “improvement of well-being” vs. “natural course of illness” or “knowledge progress” vs. “religious values considering early embryos as human life.” Alternatively, students could be asked to present the conflicting emotional reactions that other people might have according to their different values and social contexts. DE4 supports EE4: students realize that their own reactions are not unique and are capable of considering other values to assess each potential use without expressing their own opinion (decentering). The related OEs are OE5 ( moral dilemmas involve truly opposing contradictory values ) and OE6 ( text uses decentered expression, no personal opinion, and balanced mention of other values) , which are scored between 1 (lowest) and 3 (highest) using the codebook in Appendix Table 1 .
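The CJ→DE→EE→OE mapping described above can also be summarized as a small data structure. The sketch below (Python chosen purely for illustration; the labels come from the text, the encoding itself is ours) makes the conjecture map machine-checkable, e.g., to verify that every conjecture is ultimately observable through at least one OE:

```python
# Conjecture map as described in the text: each conjecture (CJ) is
# implemented by design elements (DEs), which produce expected effects
# (EEs) that are observed through observable effects (OEs).
CONJECTURE_MAP = {
    "CJ1": {"DEs": ["DE1"], "EEs": ["EE1"], "OEs": ["OE1", "OE2", "OE3"]},
    "CJ2": {"DEs": ["DE2", "DE3"], "EEs": ["EE1", "EE2", "EE3"],
            "OEs": ["OE1", "OE2", "OE4"]},
    "CJ3": {"DEs": ["DE3"], "EEs": ["EE3"], "OEs": ["OE4", "OE5"]},
    "CJ4": {"DEs": ["DE4"], "EEs": ["EE4"], "OEs": ["OE5", "OE6"]},
}

# Every conjecture should be observable through at least one OE
assert all(node["OEs"] for node in CONJECTURE_MAP.values())

# DE3 serves two conjectures (CJ2 and CJ3), as noted in the text
shared = [cj for cj, node in CONJECTURE_MAP.items() if "DE3" in node["DEs"]]
print(shared)  # → ['CJ2', 'CJ3']
```

Such an explicit encoding mirrors the conjecture map in Fig. 1 and could help a design team keep DEs, EEs, and OEs consistent across iterations.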

4.3 Implementation of a Proof-of-Principle Learning Design

This abstract learning design was implemented in a classical information processing learning model (Joyce et al. 2000 ). The resulting learning design for the 2018/2019 class can be summarized in three phases, through which students produce (i) a description of methods (OE2), (ii) a list of potential uses (OE3), and (iii) a list of dilemmas (OE3, OE4) with opposing values (OE5) that uses decentered expression (OE6). A summary of the learning design that was implemented and studied is illustrated in Fig.  2 .

Fig. 2 Diagram of the main learning design elements (DEs), their expected effects (EEs), and observable effects (OEs)

For each of the three assignments, students were first given a popularized article on recent neuroscience research to read and were helped in class to understand the methods by identifying them in the original article from the primary literature (the student-studied research) in journals such as Nature , Science , and PNAS (DE1, DE2). Then, they were asked to use this understanding of the methods to elaborate a list of potential uses of these methods/techniques and discuss their plausibility, afterward creating a table relating each potential use to at least one moral dilemma between opposing moral principles. They had to produce (at home) a written text guided by a teacher-imposed structure:

Introduction

Methods and techniques: identify and describe the scientific methods and techniques used to obtain the results presented.

Potential uses: identify or imagine potential uses of these techniques and methods and evaluate their plausibility.

Moral dilemma: identify the moral dilemmas resulting from each of the potential uses and formulate them in terms of dilemmas (tensions between moral principles).

Students analyzed three scientific articles in detail for the written assignments. These artifacts were assessed and marked. The articles were as follows: (1) Tourbe (2004); original article: Wager et al. (2004). (2) Servan-Schreiber (2007); original article: Singer et al. (2004). (3) Peyrières (2008); original article: McClure et al. (2004). Another five articles were discussed only in the classroom, and the final exam was the fourth artifact. The exam was based on (4) Campus (2018); original article: Klimecki et al. (2018). For this class, the moral principles included benevolence, autonomy, equality, respect for life, pursuit of knowledge, and freedom of trade. They were selected empirically for their heuristic value, as the secondary students in this biology course did not have a strong background in philosophy, and the decentering goal required awareness of moral differences but not a very fine classification. Of course, other learning designs could use a different list tailored to the background of the students and the goals of the curriculum. Students were required to produce a table that linked each potential use to a pair (or more) of conflicting reactions and moral values (a moral dilemma).

Over the course of the semester, feedback and assessment—at first focused mainly on scientific methods literacy—were progressively widened in scope to include potential uses and, finally, perspective-taking ability. In this proof-of-principle design, the assignments were graded against the OEs described above, using what amounted to a simplified version of the rubric used for this research (see Appendix Table 1), and returned with written formative feedback highlighting specifically which items needed to be improved. Marks were improvement-weighted: progress was encouraged by a bonus on the next assignment when the items marked as deficient were improved. This was inspired by knowledge improvement research (Scardamalia and Bereiter 2006) and was introduced as a strong incentive for students to improve. Through this iterative process, students were expected to gradually improve the selected skills and the texts they produced. A final exam assessed the skills students had acquired over the whole semester.

The methods, potential uses, and opposing moral principles in the form of dilemmas were first discussed in class. The focus was on instilling a sufficient understanding of the methods to allow students to find or imagine the potential uses—what different people might want to do using the techniques and methods of the student-studied research. This was done using a structured, teacher-driven interactive discussion that guided students to find the methods in the primary article (OE2) and to understand them, with assistance for translation into French when needed. A few examples will illustrate how a proper understanding of the methods and their potential uses is required to imagine other people’s reactions. Understanding the methods is also necessary to see the limits of the research under study. Students had to discuss how realistic each potential use was, either based on the final section of the original article (the perspectives) or imagined by the students. This discussion of methods and possible uses naturally brought up the issue of the limits of fMRI imaging and the risks of neuroenchantment (Ali et al. 2014). Since the popularized articles generally ignored the methods or simplified them to the point of omitting all reference to the degree of uncertainty and the limits of the claims that define scientific knowledge, students initially believed that the research under study produced claims that were definitive and “scientifically proven.” The comparison of popularized and original research very clearly highlighted some of the popularization issues Illes and Racine (2005) raised. For example, where Wager et al. (2004) cautiously conclude, “Although the results do not provide definitive evidence for a causal role of PFC in placebo, they were predicted by and are consistent with the hypothesis that PFC activation reflects a form of externally elicited top-down control that modulates the experience of pain” (p. 1167), the popularized neuroscience article that the students started with (Tourbe 2004) claimed that this research “proves that placebo reduces pain” (p. 26, our translation). This definitive claim is far from the prudently worded conclusion of the original article. Only a good understanding of the methods in the original article could lead to an understanding of the specific characteristics of how science validates knowledge. Reading methods sections involving many control conditions and randomization brought up discussions in which students could discover essential concepts such as ceteris paribus, dependent and independent variables, and ruling out alternative explanations. While this was not the main educational goal of this proof-of-principle design, it might have helped develop students’ perspective on the nature of science (NOS). In fact, the claim by the popularizing journalist that this research “proves that placebo reduces pain” is not at all related to the research question of Wager et al. (2004), who attempted to explore which of three hypothesized neural mechanisms causes the placebo effect. The difference was used in the proof-of-principle design to bring up a fundamental issue, as the journalist concludes that placebo is “not only a simple psychological effect,” implying a dualistic view, while Wager et al. clearly adopt a monistic experimental paradigm (and probably view of the mind). This brought up a discussion about both possible views—quite in line with the decentering goal of this design—and students were encouraged to understand each statement in the context of the different implicit paradigms within which scientific authors and popularizing journalists work—whatever view they personally might have.

Additionally, students’ attention was drawn to the conflict of interest statement in the article by de Charms et al. ( 2005 ), which mentions that C. de Charms “has an ownership interest in Omneuron Inc. with pending patents on rtfMRI-based training methods.” This was not apparent until students read the original article. Then, students were encouraged to draft a list of potential uses (OE3) for further discussion in the form of moral dilemmas (OE4, OE5). For example, students imagined that the methods used by Wager et al. ( 2004 ) could be used to measure pain experience, to evaluate the efficiency of different pain-reducing therapies, to track down people cheating the healthcare system by pretending to have pain, or to assess the efficiency of torture methods by the military or terrorists.

Students were encouraged to plainly state the potential uses of new bioscientific methods and to refrain from personal judgment. They were reminded that this course was not about deciding which opinion is best but about being able to listen to others and to take other values, beliefs, and social contexts into account when formulating one’s own independent opinion. Some of these potential uses could cause strong emotional reactions, challenging the students’ own personal or collective interests. This highlights the educational goal of overcoming egocentric values: thinking independently (Jiménez-Aleixandre and Puig 2012). Emotional reactions were expressed by students but put into perspective as possible reactions stemming from their values, beliefs, and social and cultural belongings, thus emphasizing that others might see things otherwise. For example, when formulating dilemmas and discussing how a medical doctor might have to apply advance directives regarding end-of-life issues, one student insisted on strongly expressing her opinion that doctors must do all that they can to save the lives of patients—referring to the Hippocratic Oath. This opinion was received, and the emotional load it might carry was warmly acknowledged by the teacher. Then, in the class discussion, the fact that this was one possible reaction and that others might feel otherwise was accepted, and examples were sought. The Children Act (McEwan 2014) was mentioned as an interesting avenue for exploring this dilemma.

The definition of opinion given by Astolfi ( 2008 ) was featured in the course description and referred to in classroom discussions. The moral dilemmas students produced while studying the Wager et al. ( 2004 ) example mentioned above—in line with the potential use “evaluate the efficiency of different pain-reducing therapies”—could involve benevolence (probable pain reduction) vs. respect for beliefs (not interfering with natural processes of health or divine intervention). Most student-studied research could lead to dilemmas such as pursuit of knowledge (better understanding of brain activities and processes) vs. loss of benevolence (money used in this research is not available elsewhere for other possible benevolent uses). The rather extreme example of assessing torture methods could lead to a dilemma of benevolence (freeing prisoners from terrorists) vs. malevolence (inflicting pain on humans).

It is worth noting in this case that though scientific literature arguing for the ineffectiveness of torture in obtaining useful confessions (Starr 2019) was mentioned in this class, the teacher did not prevent such a dilemma from being posed, since some people might weigh the first arm of the dilemma more heavily than the second. This highlights how the decentering goal of this design is not an ethical discussion or rational debate to determine the best opinion but could well be used before various other CT learning activities. Having answered RQ1 by describing how we successfully implemented the general design CJs (Section 3) using a conjecture mapping technique (Section 4.2), let us now examine the empirical results to answer RQ2.

5 Results from the Proof-of-Principle Learning Design

5.1 Results from Student Artifacts

Does the learning design help students improve their scientific methods literacy and decentering abilities (RQ2)? As explained in Section 4.1 , we examined changes in artifacts produced by students (also called student productions or learner outputs in the literature), i.e., papers and written exams. Improvement in scientific methods literacy (EE2) was measured with OE2, i.e., identification of scientific methods and techniques in student artifacts. Decentering competency (EE3/EE4) was measured with four indicators: quantity of moral dilemmas (OE3), diversity of values (OE4), quality of moral dilemmas (OE5), and decentered communication (OE6).

The results for all the items indicate progress across the semester (Fig. 3). With only N = 12, we computed the effect size (Cohen’s d between the first assignment paper and the text produced for the written exam), which quantifies the magnitude of the progression (the difference in means) relative to the uncertainty (standard deviation) in the data. For most scores, the effect size can be considered large (from d = 1.29 to d = 2.76), while the effect sizes for diversity of values ( d = 0.38) and decentered communication ( d = 0.86) qualify as small-to-medium and large, respectively, by Cohen’s conventions.
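Cohen's d can be computed in several ways for pre/post designs. A common convention, sketched below (our assumption, since the paper does not specify the exact variant used), divides the mean difference by the pooled standard deviation of the two sets of scores; the student scores shown are hypothetical:

```python
from statistics import mean, stdev

def cohens_d(first, last):
    """Cohen's d: mean change divided by the pooled standard deviation."""
    n1, n2 = len(first), len(last)
    s1, s2 = stdev(first), stdev(last)  # sample standard deviations
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(last) - mean(first)) / pooled_sd

# Hypothetical 3-point-scale scores for a cohort of 12 students
first_paper = [1, 1, 2, 2, 1, 2, 1, 2, 2, 1, 2, 2]
final_exam = [2, 3, 3, 2, 2, 3, 2, 3, 3, 2, 3, 3]
print(round(cohens_d(first_paper, final_exam), 2))  # → 1.94
```

With paired data from the same students, a paired-samples variant (dividing by the standard deviation of the individual differences) would yield a different value; both conventions appear in the methodological literature.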

The scores for the identification of techniques and methods, used to measure scientific methods literacy (OE2), had improved by 0.6 points by the last iteration. Concerning the second part of RQ2—measures of decentering skills—the strongest progression (+ 1.25) was found for the quantity of moral dilemmas (OE3) proposed by the students. In most papers from the second assignment, several dilemmas in the form of “value vs. other value” were found, and the score remained generally stable in the final stage. The diversity of values proposed (OE4) increased moderately (+ 0.23), but the scores for the first paper had already achieved a high mean value (2.33); thus, there was little margin for improvement. The second-highest progression (+ 0.91) was found for the quality of moral dilemmas, which measures the ability to present dilemmas as contradicting values in a symmetrical way (OE5). Decentered communication abilities (OE6) showed little progression (+ 0.33) but had the highest initial value ( M = 2.50).

In addition, the final examination (the fourth student artifact produced) was aligned with the official curriculum.

5.2 Student Perceptions: Results from an End-of-Semester Survey

Additional insights for answering RQ2 can be drawn from a selection of responses to the end-of-semester questionnaire (2019 cohort, N  = 13, responses translated from French) concerning the students’ perceptions of their CT skills (decentering and scientific methods literacy) and, to some extent, their CT attitudes.

Overall, decentering skills (EE4) were the skills most frequently mentioned by students as acquired (21 mentions), Footnote 3 expressed in statements such as (our translation)

I am more objective
I take a step away from my own opinion
I am more open-minded towards different possible points of view, be it my opinion or not

Concerning EE3 and EE4, asking students about their perceptions of moral dilemmas elicited responses that included 7 mentions related to learning to step back and take a different look at one’s own opinion and to take more into account the point of view of others or different points of view, expressed as follows (our translation):

The discussion of the use of research through moral dilemmas helped me a lot to realize that several opinions could be considered. It is not just if an opinion can be accepted, but it all depends on the point of view
I think I have learned to explain points of view that are contrary to mine rather than "feeling" them more intuitively
…to better see the vision of others even if I do not necessarily share it, and therefore to take a step back .…

Most students (10 fully and 3 partly, N  = 13) considered that they had attained the learning objective “Being able to distinguish the issues of a scientific question in the form of moral dilemmas.”

More than half (8) of the students mentioned that emotions and empathy played a role in imagining or assessing potential situations, expressed as follows (our translation):

For me, cognitive empathy played a major role in the choice of dilemmas, because, I tried my best to put myself on each side of opinions in order to be as objective as possible, without feeling emotional empathy
My empathy probably biased my judgment of potential uses, but I don't think I let it show in my work
I think I can tell them apart. My emotional empathy is the first that arrives, and my cognitive empathy comes to take a step back before making a judgment

Concerning EE2 (scientific methods literacy), a large majority of students considered they had changed the way they formed opinions about progress in science during this module (11, N  = 13). The skills most often mentioned included learning to be wary of popularized articles (16 mentions), thinking more critically about scientific information (8), and developing the habit of referring to original scientific articles (8). Many mentioned being better able to understand and/or explain the methods and results of scientific research (7).

6 Discussion

This exploratory study develops a new conceptualization and a learning design method for developing a few specific CT skills useful for discussing SSIs raised by popularized (neuro)science. The goal of this educational research was to extract theoretical conjectures from recent research on CT education and the effects of emotions, decentering, and empathy and test their generativity in producing workable designs in which the acquisition of desired CT skills (decentering, methods literacy) can be observed through traces. In short, we presented guidelines for creating learning designs, and we tested a proof-of-principle design implemented in a class.

The results from this 2018/2019 implementation show that students were able to propose a diversity of moral principles (mostly found in the resources proposed for the course) in the first assignment—early in the semester—and their texts also show signs of moderately good decentering skills. However, the most progress seems to occur in the structuring of these values into full-fledged moral dilemmas: moral principle A vs. moral principle B. In the first paper, moral principles were often written in a disorganized way, while in paper 2, they were more frequently proposed in the form of dilemmas. We propose that this improved structuring reflects an improved ability to organize conflicting values, without judgment, into symmetrical pairs of opposites, which requires restraining one’s opinions and is indicative of good decentering ability.

These results also tentatively confirm the value of iterating essentially the same activity in this design. Contrary to the advice frequently given to teachers to vary the types of tasks, repeated assignments involving the same task but different topics, guided by precise feedback as well as incentive-based grading, helped learners significantly improve the targeted high-level skills, i.e., scientific methods literacy and decentering abilities, as measured by increased OE scores on the texts produced by students (Section 5.1). A design based on a single assignment would probably not give students sufficient time and opportunity to learn these specific, difficult skills.

The central choice to not debate opinions, with students expressly instructed to refrain from expressing their personal opinions on the SSIs under study, appears to have been perceived as effective (13 mentions in the end-of-semester survey) but was also a challenge for some of the students:

I found [not giving my opinion] difficult: since we tend to think our opinion is the best, we want to express it and share it. However, staying neutral and discussing all imaginable opinions on a situation is a task I [ultimately] enjoyed doing (our translation).

It would be methodologically problematic to merge data obtained from previous cohorts in an evolving design, but we note that previous questionnaires (see Footnote 4) yielded similar results on these points.

Taken together, the results from the students' artifacts and the survey tentatively suggest that engaging learners in the described learning activities produced a shift in students' epistemology, from a naïve view that knowledge is either true or false and that truths come from recognized authority (Bromme et al. 2010) towards a more sophisticated one. Learners developed independent opinions and moved from mostly emotionally empathetic reactions to a more decentered (cognitive) empathy when forming opinions about neuroscience SSIs. The increase in scientific methods literacy (see Fig. 3) and the final questionnaire responses mentioning the importance of reading original articles or understanding the methods, taken together, suggest a more critical appraisal of popularized scientific information.

Fig. 3: Average scores (M) in the proof-of-principle learning design for scientific methods literacy and methods (OE2) and decentering (OE3–6). Also shown: the standard deviation and the effect size (Cohen's d between first and last), in white on the bars.
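As a rough illustration of the effect-size statistic reported in the figure, the following is a minimal sketch of Cohen's d (difference of means divided by the pooled standard deviation) between first- and last-assignment scores. The score lists below are hypothetical placeholders, not the study's data.

```python
import math

def cohens_d(first, last):
    """Cohen's d between two score samples, using the pooled standard deviation."""
    n1, n2 = len(first), len(last)
    m1 = sum(first) / n1
    m2 = sum(last) / n2
    # Sample variances (Bessel-corrected)
    v1 = sum((x - m1) ** 2 for x in first) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in last) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Hypothetical OE scores for the first and last assignments of 13 students
first_paper = [1, 2, 1, 2, 1, 3, 2, 1, 2, 2, 1, 2, 1]
last_paper = [3, 4, 3, 4, 2, 4, 3, 3, 4, 3, 2, 4, 3]
print(round(cohens_d(first_paper, last_paper), 2))
```

A positive d indicates improvement from first to last assignment; values around 0.8 or above are conventionally read as large effects.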

Let us recall our theoretical tenets: emotions play an important role in opinion building, particularly when contradictory moral principles are involved. We also distinguish between emotional empathy and cognitive empathy; the latter allows for a more distant and balanced appraisal of situations and can result in positive feelings of care and prosocial motivation. Overall, research shows that the cognitive and emotional systems are complex and concurrent, and the possibility that emotional and cognitive empathy are separate processes opens the important possibility that they can be trained separately.

This new conceptualization based on developing cognitive empathy and balancing emotion with reason to enhance decentering in opinion building regarding new SSIs—described in Section 2—is the main theoretical outcome of this research. We propose that it offers a new perspective that could be used as a preliminary step to enhance many CT learning designs. The second outcome (answering RQ1) is the development of a design and analysis method based on conjecture mapping (Section 3) that guides the translation of theory into practical learning designs. This design method showed its effectiveness by producing, according to design-based research principles, successive workable learning designs that could be improved to develop scientific literacy and decentering competency in a typical classroom. The related empirical outcome associated with RQ2 is a proof-of-principle design in which students' written artifacts could be analyzed. It is described in Section 4 and discussed in Section 5. It has been iteratively implemented, analyzed, and optimized over many years.

Cognitive empathy, though crucial for decentering, is not generally developed in schools, but our results suggest it can be taught. Having to identify conflicting moral principles seems to have helped the learners realize that contradictory positions about neuroscience SSIs do exist, could be valid, and should be taken into account in their opinion building process. Traces in the assignments and exams suggest that this important step towards balancing emotion and reason in discussing neuroscience SSIs was achieved. Our results do not prove the development of important intermediates such as cognitive empathy or the control of emotional empathy, but taken together, they do suggest that the design method can produce designs that contribute to this educational goal of independent opinion building. The results tentatively confirm that addressing the emotions evoked by SSIs can be an early step towards CT, not just the ultimate level of CT (De Vecchi 2006) requiring a degree of emotional control rarely achieved except by expert debaters (Legg 2018). They offer reasonable evidence that this new conceptualization of CT—based on recent research that cognitive empathy can be trained separately—can be used to inform workable designs that produce interesting results related to the decentering and scientific literacy skills identified and selected in this study.

7 Conclusions and Discussion

Within the large array of CT designs, this new conceptualization offers a novel perspective on addressing the numerous biases and difficulties that emotions can induce. The outcomes we present could be of use (i) for researchers (new conceptualization), (ii) for educational designers (CJ mapping), and (iii) to inspire teachers and educational designers (proof-of-principle design).

Giving students a good understanding of methods (scientific methods literacy) can empower them to see through much of the hype and overinterpretation of popularized science, as exemplified in neuroenchantment. This focus on scientific methods is rare (Kampourakis et al. 2014) and aims to help students assess the limits and potential uses of scientific claims before addressing SSIs. It can also help students understand how knowledge is validated in scientific articles. On this solid rational basis, the approach presented here takes the unusual route of developing decentering skills for discussing SSIs by letting students imagine people and their emotional reactions in the new situations that could result from neuroscience research. By refraining from debating formed opinions, which has been shown to limit the full potential of many designs for CT education, and instead discussing diverse possible emotional reactions in the form of moral dilemmas, this design attempts to circumvent many of the problems of classroom debates and could prepare students for the reasonable reflective thinking that defines CT (Ennis 1987). This approach is founded on the idea that cognitive empathy can be developed without reinforcing emotional empathy. It is an attempt to help students take their own and others' emotions into account in a reasonable way (decentering in the sense of Klimecki and Singer 2013) and reconcile emotions and reason. It could be seen as an approach for fostering emotive reasoning (Sadler and Zeidler 2005).

We have argued that learning to take into account different, contradictory reactions to SSIs by other people (with different values, social contexts, and beliefs), and developing cognitive empathy for the emotional reactions of others while refraining from emotional empathy, can be foundational in the process of building independent opinions (Jiménez-Aleixandre and Puig 2012) by helping students recognize and learn to manage others' and their own emotional reactions (decentering skills). The proposed design method translates this theory into educational guidelines in the form of conjectures, design elements, expected effects, and observable effects that have been implemented and analyzed. The analysis of student artifacts about recent popularized and original neuroscience research suggests that this conceptualization, focused on scientific methods literacy and cognitive empathy, can be used to effectively develop decentering skills as measured by the observed effects. It does not prove that these students are better in all dimensions of CT, but it confirms the value of exploring this approach.

From a research perspective, the proof-of-principle design could not be compared with designs considered standards or references, since this conceptualization breaks new research ground. We have discussed how the DBR research paradigm (e.g., Collins et al. 2004) differs from the experimental paradigm and argued that it is particularly relevant for exploring innovative designs addressing new educational challenges. The first student paper analyzed—at the very beginning of the semester—delivers much of the information expected from a pretest, as it tests students' skills before the semester-long intervention. The final exam—while designed from a certificative assessment perspective—can be considered to deliver some of the information of a posttest. Setting up a quasi-experimental control-group design would be too difficult, since there are too many design variables to manipulate and the number of students available is insufficient. However, our results are evidence that this design is worth investigating in larger educational setups. Additionally, some results, such as the marked progression in the quantity and quality of moral dilemmas, might be explained by the fact that students did not fully understand the instructions at the beginning, or took time to adjust to new expectations and therefore adjusted the content and structure of their second paper. While the analysis of student artifacts during this semester-long design indicates progress, suggesting that students developed CT skills EE1–4 with respect to recent neuroscience SSIs, we have no data about the long-term effects on independent opinion building and CT (no follow-up survey) or about the possible influence these effects might have on students' future decisions. We fully agree with the need for developing dispositions towards CT (Ennis 1987; Facione 1990; Jiménez-Aleixandre and Puig 2012). We did collect some evidence that students demonstrate selected CT skills in their papers and exams, but without data about the actual behavior of students outside of and after this course, caution is required in drawing conclusions about possible changes in terms of CT dispositions.

Another limitation that requires discussion is the fact that the teacher is also one of the researchers, a classical validity concern. We would like to stress that widely recognized authors such as Schön (1983) have demonstrated the richness and relevance of the "reflective practitioner" approach, particularly for education research seen as design-based (Goodyear 2015). DBR and action research (Greenwood and Levin 1998) often rely on this involvement to increase the relevance of the outcomes. It is possible that this reflective subjectivity is more relevant to this type of exploratory research than attempted objectivity. It is worth noting that the data coding and analysis were based on written artifacts rather than teacher reporting and that the data were (double-)coded by other researchers not involved in the teaching process.

For educational designers and teachers, the limited set of skills selected does not imply that this design develops the full set of CT skills mentioned by Ennis and Facione; rather, we propose that some design elements could be integrated into and contribute to many existing and well-tested designs that aim for CT. The limited number of participants requires caution as to the generalizability of the proof-of-principle design (RQ1). Indeed, the results for RQ2 are based on only 13 students and should be seen mainly as reasonable evidence that this conceptualization can produce effective designs and that the design method can produce workable designs that can be implemented, analyzed, discussed, and optimized.

DBR addresses new educational challenges by refining and testing models that can be deployed in other contexts, and each new iteration is an extension of the theory (Barab 2006). Thus, rather than a specific design that teachers might adopt or reject, this design approach and the proposed conjectures in Section 3 can be used to create many learning designs for different curricular and cultural contexts or educational levels. The proposed principles-based design method can guide the design or adaptation of many learning environments for teaching delicate subjects. While this approach has been developed and tested in the context of SSIs raised by popularized neuroscience, the generativity of the design method is not restricted to this subject area: it could be applied in many existing or future areas of bioscience in which progress is raising new SSIs, and possibly to the more classic SSIs raised by GMOs or climate change. Introductory learning activities based on our design conjectures or inspired by the sample design could be used to develop decentering skills before engaging students in more challenging learning tasks, such as argumentation about SSIs. We propose that this design could contribute foundationally to enhance many of the excellent designs for teaching the CT skills needed by future citizens. For example, a classical problem with debating is that the debate revolves not around the value of the arguments but around the personal sympathy or dislike felt towards those presenting their points (i.e., relational rather than epistemic resolution of conflict; Buchs et al. 2004). A preliminary intervention developing decentering skills might help students learn to take into account other points of view. It might be worth exploring whether this enhances the notable designs for argumentation in the classroom using strategies such as listening triads, argument lines, and jigsaw groups, which produced very disappointing results in Osborne et al.'s (2013) study.

Taking into account the different forms that empathy can take and their influences on learning processes opens new avenues for research, not only about SSIs but possibly also in other areas where emotional reactions interfere with learning processes. For example, designs could be studied that introduce the immunological mechanisms of vaccination via an adapted form of this decentering approach, e.g., discussing—without personal opinions—various possible emotional reactions stemming from values, social belonging, and beliefs as respectable but as separate from the instructional goals. After such an introduction, instruction focused on using scientific models to explain or predict situations that are meaningful to the students might be more acceptable to many of them. This decentering educational approach could also support conceptual change. For example, Coley and Tanner (2015) show how anthropocentric thinking (among others) causes the persistence of many scientifically inaccurate ideas, often termed misconceptions. It might well be that the empathy elicited towards some scientific concepts interferes with student understanding. For example, discussing invasive species in the context of ecology in multicultural classes could elicit opposing emotional empathy responses from students of migrant origin and others with strong political views, which might hinder scientific understanding. It would be worth testing whether such a problem could be headed off by a short sequence developing cognitive empathy through this decentering approach.

We have shown how this approach—firmly based on scientific methods literacy—brings up nature of science (NOS) questions such as how the claims have been established, why this question is addressed, and who is involved in the research, questions that are too often ignored in science education focused on definitive knowledge. Didactic transposition theory (Chevallard 1991) shows how difficult it is to escape this transformation of classroom knowledge. However, our results are in line with Hoskins et al. (2007), suggesting that it is possible to guide students to the primary literature and to discuss how scientific knowledge is validated, as many have called for (e.g., Abd-El-Khalick 2013). More research is needed to assess whether the decentering approach we propose might help classes discuss the NOS without the debate becoming biased or shaped by dogmatic positions such as pro-science or anti-science (as discussed in Section 4.2 with the article by deCharms et al. (2005)).

The generalizability of this approach could be limited by the social acceptability of some of the CT dimensions it develops. For example, challenging collective interests and values (Jiménez-Aleixandre and Puig 2012) could be problematic in some schools. Since this design encourages students to imagine various people's reactions based on their values and beliefs, schools and teachers must be able to accept students mentioning potential uses that could strongly conflict with their own personal or collective interests and values. This approach also requires teachers to have good decentering skills. Furthermore, frequent reference to primary literature and recent research techniques is a stimulating but challenging prospect that many teachers nevertheless learn to appreciate, as scientific literature is now easily accessible through the internet (Lombard, Schneider & Weiss 2020).

Globally, this research suggests that applying this learning design approach for CT, which is focused on developing cognitive empathy during the processes of opinion building, could improve rational debate and contribute to CT teaching. Since it involves addressing challenging new problems, fosters authenticity (Lombard 2011), and can be adapted to local constraints and opportunities, it may be of interest to many teachers who struggle with teaching SSIs.

Footnote 1: Author 1 is also a lecturer and teacher trainer at the anonymized university; see Section 6 for a discussion of how this dual researcher/practitioner role was taken into account when analyzing the data.

Footnote 2: Full responses are available (in French) at this URL: http://tecfa.unige.ch/perso/lombardf/calvin/4OC/4OC_2018_Questionnaire_dvaluation_par_les_elves_en_fin_de_module.pdf

Footnote 3: The numbers in parentheses are the counts of mentions of this skill across all questions in the questionnaire; this value can exceed the number of students.

Footnote 4: Available on request.

Abd-El-Khalick, F. (2013). Teaching with and about nature of science, and science teacher knowledge domains. Science & Education, 22 (9), 2087–2107. https://doi.org/10.1007/s11191-012-9520-2 .

Ali, S. S., Lifshitz, M., & Raz, A. (2014). Empirical neuroenchantment: from reading minds to thinking critically. Frontiers in Human Neuroscience, 8 . https://doi.org/10.3389/fnhum.2014.00357 .

Angermuller, J. (2018). Truth after post-truth: for a strong programme in discourse studies. Palgrave Communications, 4 (1), 30. https://doi.org/10.1057/s41599-018-0080-1 .

Aronson, E., Wilson, T. D., & Akert, R. M. (2013). Social psychology (8th). Pearson.

Astolfi, J.-P. (2008). La saveur des savoirs. Disciplines et plaisir d’apprendre . Paris: Presses universitaires de France.

Barab, S. (2006). Design-based research, a methodological toolkit for the learning scientist. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 153–169). New York: Cambridge University Press.

Bavel, J. J. V., & Pereira, A. (2018). The partisan brain: an identity-based model of political belief. Trends in Cognitive Sciences, 22 (3), 213–224. https://doi.org/10.1016/j.tics.2018.01.004 .

Berland, L. K., & McNeill, K. L. (2010). A learning progression for scientific argumentation: understanding student work and designing supportive instructional contexts. Science Education, 94 (5), 765–793. https://doi.org/10.1002/sce.20402 .

Bloom, P. (2017a). Against empathy: the case for rational compassion . London: Penguin Random House.

Bloom, P. (2017b). Empathy and its discontents. Trends in Cognitive Sciences, 21 (1), 24–31. https://doi.org/10.1016/j.tics.2016.11.004 .

Bowell, T. (2018). Changing the world one premise at a time: argument, imagination and post-truth. In M. A. Peters, S. Rider, M. Hyvönen, & T. Besley (Eds.), Post-Truth, Fake News (pp. 169–185). Springer Singapore. https://doi.org/10.1007/978-981-10-8013-5_15 .

Bromme, R., Pieschl, S., & Stahl, E. (2010). Epistemological beliefs are standards for adaptive learning: a functional theory about epistemological beliefs and metacognition. Metacognition and Learning, 5 (1), 7–26. https://doi.org/10.1007/s11409-009-9053-5 .

Brookhart, S., Moss, C., & Long, B. (2008). Formative assessment. Educational Leadership, 66 (3), 52–57.

Brossard, D., & Scheufele, D. A. (2013). Science, new media, and the public. Science, 339 (6115), 40–41. https://doi.org/10.1126/science.1232329 .

Brown, A. L. (1992). Design experiments: theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2 (2), 141–178.

Buchs, C., Butera, F., Mugny, G., & Darnon, C. (2004). Conflict elaboration and cognitive outcomes. Theory Into Practice, 43 (1), 23–30.

Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12 (5), 187–192. https://doi.org/10.1016/j.tics.2008.02.010 .

Campus. (2018). Le cerveau dispose d'un mécanisme capable de couper l'envie de se venger. Campus, 134 , 9.

Chang, L., & Tsao, D. Y. (2017). The code for facial identity in the primate brain. Cell, 169 (6), 1013–1028.e14. https://doi.org/10.1016/j.cell.2017.05.011 .

Check, E. (2005). Ethicists urge caution over emotive power of brain scans [news]. Nature, 435 , 254–255. https://doi.org/10.1038/435254a .

Chevallard, Y. (1991). La transposition didactique – Du savoir savant au savoir enseigné . Grenoble: La pensée sauvage.

CIIP. (2011). Plan d’études Romand . Neuchâtel: Conférence intercantonale de l’instruction publique de la Suisse Romande et du Tessin https://www.plandetudes.ch . Accessed 25 Aug 2020.

Clark, D. B., & Linn, M. C. (2013). The knowledge integration perspective: connections across research and education. In S. Vosniadou (Ed.), International handbook of research on conceptual change (pp. 520–538). Taylor & Francis.

Clément, P., & Quessada, M.P. (2013). Les conceptions sur l’évolution biologique d’enseignants du primaire et du secondaire dans 28 pays varient selon leur pays et selon leur niveau d’étude. Actualité de la Recherche en Éducation et Formation , Aug 2013, Montpellier, France. 19 p. hal-01026095.

Coley, J. D., & Tanner, K. (2015). Relations between intuitive biological thinking and biological misconceptions in biology majors and nonmajors. CBE-Life Sciences Education, 14 (1), ar8. https://doi.org/10.1187/cbe.14-06-0094 .

Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13 (1), 15–42.

Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLoS One, 12 (5), e0175799. https://doi.org/10.1371/journal.pone.0175799 .

Dawson, V., & Carson, K. (2018). Introducing argumentation about climate change socioscientific issues in a disadvantaged school. Research in Science Education , 1–21. https://doi.org/10.1007/s11165-018-9715-x .

Dawson, V. M., & Venville, G. (2010). Teaching strategies for developing students’ argumentation skills about Socioscientific issues in high school genetics. Research in Science Education, 40 (2), 133–148. https://doi.org/10.1007/s11165-008-9104-y .

De Vecchi, G. (2006). Enseigner l’expérimental en classe : pour une véritable éducation scientifique . Paris: Hachette éducation.

Decety, J., & Cowell, J. M. (2014). The complex relation between morality and empathy. Trends in Cognitive Sciences, 18 (7), 337–339. https://doi.org/10.1016/j.tics.2014.04.008 .

deCharms, R. C., Maeda, F., Glover, G. H., Ludlow, D., Pauly, J. M., Soneji, D., & Mackey, S. C. (2005). Control over brain activation and pain learned by using real-time functional MRI. Proceedings of the National Academy of Sciences of the United States of America, 102 (51), 18626–18631. https://doi.org/10.1073/pnas.0505210102 .

diSessa, A. (2002). Why “conceptual ecology” is a good idea. In M. Limón & L. Mason (Eds.), Reconsidering Conceptual Change: Issues in Theory and Practice (pp. 28–60). Springer Netherlands.

Duit, R., Treagust, D. F., & Widodo, A. (2008). Teaching science for conceptual change: theory and practice. In S. Vosniadou (Ed.), International handbook of research on conceptual change (pp. 629–646).

Duschl, R. A., & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education, 38 (1), 39–72. https://doi.org/10.1080/03057260208560187 .

Ennis, R. H. (1987). A taxonomy of critical thinking dispositions and abilities. In J. B. Baron & R. J. Sternberg (Eds.), Teaching thinking skills : Theory and practice (pp. 9–26). W H Freeman/Times Books/ Henry Holt & Co..

Facione, P. (1990). Critical thinking : A statement of expert consensus for purposes of educational assessment and instruction (the Delphi report).

Falk, H., Brill, G., & Yarden, A. (2008). Teaching a biotechnology curriculum based on adapted primary literature. International Journal of Science Education, 30 (14), 1841–1866.

Fenichel, M., & Schweingruber, H. A. (2010). Surrounded by science: learning science in informal environments . Washington: National Academy Press.

Festinger, L. (1957). A theory of cognitive dissonance . Stanford University Press.

Fisher, M., Knobe, J., Strickland, B., & Keil, F. C. (2018). Vous avez dit débat constructif ? Cerveau et Psycho , 78–82.

Forgas, J. P. (2013). Don’t worry, be sad! On the cognitive, motivational, and interpersonal benefits of negative mood. Current Directions in Psychological Science, 22 (3), 225–232. https://doi.org/10.1177/0963721412474458 .

Fredrickson, B. (2004). The broaden–and–build theory of positive emotions. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 359 (1449), 1367–1377. https://doi.org/10.1098/rstb.2004.1512 .

Goldstein, E. B. (2018). Cognitive psychology: Connecting mind, research, and everyday experience (5th). Cengage Learning. https://doi.org/10.1002/sce.21086 .

Goodyear, P. (2015). Teaching as design. HERDSA Review of Higher Education, 2 , 27–50 http://www.herdsa.org.au/herdsa-review-higher-education-vol-2/27-50 . Accessed 25 Aug 2020.

Greenwood, D. J., & Levin, M. (1998). Action research, science, and the co-optation of social research. Studies in Cultures, Organizations and Societies, 4 , 237–261.

Gruber, J., Mauss, I. B., & Tamir, M. (2011). A dark side of happiness? How, when, and why happiness is not always good. Perspectives on Psychological Science, 6 (3), 222–233. https://doi.org/10.1177/1745691611406927 .

Hand, B., & Prain, V. (2001). Teachers implementing writing-to-learn strategies in junior secondary science: a case study. Science Education, 86 (6), 737–755. https://doi.org/10.1002/sce.10016 .

Hoskins, S. G., Stevens, L. M., & Nehm, R. H. (2007). Selective use of the primary literature transforms the classroom into a virtual laboratory. Genetics, 176 (3), 1381–1389.

Hounsell, D., & McCune, V. (2002). Teaching-learning environments in undergraduate biology: initial perspectives and findings . Edinburgh: Economic & Social Research Council, Department of Higher and Community Education.

Illes, J., & Racine, E. (2005). Imaging or imagining? A neuroethics challenge informed by genetics. The American Journal of Bioethics, 5(2), 5–18. https://doi.org/10.1080/15265160590923358 .

Jenkins, A. C., Macrae, C. N., & Mitchell, J. P. (2008). Repetition suppression of ventromedial prefrontal activity during judgments of self and others. PNAS, 105 (11), 4507–4512. https://doi.org/10.1073/pnas.0708785105 .

Jiménez-Aleixandre, M. P., & Puig, B. (2012). Argumentation, evidence evaluation and critical thinking. In B. J. Fraser, K. Tobin, & C. J. McRobbie (Eds.), Second International Handbook of Science Education (pp. 1001–1015). https://doi.org/10.1007/978-1-4020-9041-7_66 .

Jiménez-Aleixandre, M. P., Rodríguez, A. B., & Duschl, R. A. (2000). “Doing the lesson” or “doing science”: argument in high school genetics. Science Education, 84 (6), 757–792. https://doi.org/10.1002/1098-237X(200011)84:6<757::AID-SCE5>3.0.CO;2-F .

Johnson, D. W., & Johnson, R. T. (2009). Energizing learning: the instructional power of conflict. Educational Researcher, 38 (1), 37. https://doi.org/10.3102/0013189X08330540 .

Jonassen, D. H., & Kim, B. (2010). Arguing to learn and learning to argue: design justifications and guidelines. Educational Technology Research and Development, 58 (4), 439–457. https://doi.org/10.1007/s11423-009-9143-8 .

Joyce, B. R., Weil, M., & Calhoun, E. (2000). Models of teaching (6th.) . Needham Heights: Allyn & Abacon.

Kampourakis, K., Reydon, T. A. C., Patrinos, G. P., & Strasser, B. J. (2014). Genetics and society—educating scientifically literate citizens: introduction to the thematic issue. Science & Education, 23 (2), 251–258. https://doi.org/10.1007/s11191-013-9659-5 .

Klimecki, O. M., & Singer, T. (2013). Empathy from the perspective of social neuroscience. In J. Armony & P. Vuilleumier (Eds.), The Cambridge Handbook of Human Affective Neuroscience (pp. 533–550). https://doi.org/10.1017/CBO9780511843716.029 .

Klimecki, O. M., Sander, D., & Vuilleumier, P. (2018). Distinct brain areas involved in anger versus punishment during social interactions. Scientific Reports, 8 (1), 10556. https://doi.org/10.1038/s41598-018-28863-3 .

Kuhn, T. S. (1962). The structure of scientific revolutions (1st ed.). Chicago: Chicago University Press.

Legg, C. (2018). The solution to poor opinions is more opinions: Peircean pragmatist tactics for the epistemic long game. In M. A. Peters, S. Rider, M. Hyvönen, & T. Besley (Eds.), Post-Truth, Fake News (pp. 43–58). Springer Singapore. https://doi.org/10.1007/978-981-10-8013-5_4 .

Lilensten, J. (2018). Les sens du mot science . Les Ulis: EDP sciences.

Lombard, F. (2011). New opportunities for authenticity in a world of changing biology. In A. Yarden & G. S. Carvalho (Eds.), Authenticity in Biology Education: Benefits and Challenges (pp. 15–26). Braga, Portugal: Universidade do Minho, Centro de Investigação em Estudos da Criança (CIEC).

Lombard, F., & Weiss, L. (2018). Can Didactic Transposition and Popularization Explain Transformations of Genetic Knowledge from Research to Classroom? Science & Education . https://doi.org/10.1007/s11191-018-9977-8

Lombard, F., Merminod, M., Widmer, V., & Schneider, D. K. (2018). A method to reveal fine-grained and diverse conceptual progressions during learning. Journal of Biological Education, 52 (1), 101–112. https://doi.org/10.1080/00219266.2017.1405534

Lombard, F., Schneider, D. K., & Weiss, L. (2020). Jumping to science rather than popularizing: a reverse approach to update in-service teacher scientific knowledge. Progress in Science Education, 3. https://doi.org/10.25321/prise.2020.1005 .

Lundegård, I., & Hamza, K. M. (2014). Putting the cart before the horse: the creation of essences out of processes in science education research. Science Education, 98 (1), 127–142.

McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: the effect of brain images on judgments of scientific reasoning. Cognition, 107(1), 343–352. https://doi.org/10.1016/j.cognition.2007.07.017 .

McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural correlates of behavioral preference for culturally familiar drinks. Neuron, 44 (2), 379–387. https://doi.org/10.1016/j.neuron.2004.09.019 .

McEwan, I. (2014). The children act . Vintage Books.

Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: a methods sourcebook . London: SAGE.

Mor, Y., & Craft, B. (2012). Learning design: reflections on a snapshot of the current landscape. Research in Learning Technology, 20 , 85–94. https://doi.org/10.3402/rlt.v20i0.19196 .

Mor, Y., & Mogilevsky, O. (2013). The learning design studio: collaborative design inquiry as teachers’ professional development. Research in Learning Technology, 21 . https://doi.org/10.3402/rlt.v21i0.22054 .

Mottier Lopez, L. (2015). Évaluations formative et certificative des apprentissages : Enjeux pour l’enseignement. De Boeck.

Narvaez, D., & Vaydich, J. L. (2008). Moral development and behaviour under the spotlight of the neurobiological sciences. Journal of Moral Education, 37 (3), 289–312. https://doi.org/10.1080/03057240802227478 .

Nenciovici, L., Allaire-Duquette, G., & Masson, S. (2019). Brain activations associated with scientific reasoning: a literature review. Cognitive Processing, 20 (2), 139–161. https://doi.org/10.1007/s10339-018-0896-z .

Ohlsson, S. (2013). Beyond evidence-based belief formation: how normative ideas have constrained conceptual change research. Frontline Learning Research, 1 ( 2 ), 70–85. https://doi.org/10.14786/flr.v1i2.58 .

Osborne, J. (2010). Arguing to learn in science: the role of collaborative, critical discourse. Science, 328 (5977), 463–466. https://doi.org/10.1126/science.1183944 .

Osborne, J., Simon, S., Christodoulou, A., Howell-Richardson, C., & Richardson, K. (2013). Learning to argue: a study of four schools and their attempt to develop the use of argumentation as a common instructional practice and its impact on students. Journal of Research in Science Teaching, 50 (3), 315–347.

Peters, R. S. (2015). Authority, responsibility and education . Routledge. 1st: 1959.

Peyrières, C. (2008). Le paradoxe Pepsi-Coca. Science et Vie Junior Décembre, 2008 , 61.

Piaget, J. (1950). Introduction à l’épistémologie génétique. (II) La pensée physique . Paris: Presses Universitaires de France.

Plummer, J. D., & Krajcik, J. (2010). Building a learning progression for celestial motion: elementary levels from an earth-based perspective. Journal of Research in Science Teaching, 47 (7), 768–787. https://doi.org/10.1002/tea.20355 .

Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: toward a theory of conceptual change. Science Education, 66 (2), 211–227. https://doi.org/10.1002/sce.3730660207 .

Potvin, P. (2013). Proposition for improving the classical models of conceptual change based on neuroeducational evidence: conceptual prevalence. Neuroeducation, 2 ( 1 ), 16–43. https://doi.org/10.24046/neuroed.20130201.16 .

QAA. (2002). Subject benchmark statements : Biosciences . Cheltenham: Quality Assurance Agency for Higher Education.

Qiao-Tasserit, E., Corradi-Dell’Acqua, C., & Vuilleumier, P. (2018). The good, the bad, and the suffering. Transient emotional episodes modulate the neural circuits of pain and empathy. Neuropsychologia, 116 , 99–116. https://doi.org/10.1016/j.neuropsychologia.2017.12.027 .

Rider, S., & Peters, M. A. (2018). Post-truth, fake news: viral modernity and higher education. In M. A. Peters, S. Rider, M. Hyvönen, & T. Besley (Eds.), Post-Truth, Fake News (pp. 1–12). Springer Singapore. https://doi.org/10.1007/978-981-10-8013-5_1 .

Rowe, G., Hirsh, J. B., & Anderson, A. K. (2007). Positive affect increases the breadth of attentional selection. Proceedings of the National Academy of Sciences, 104 (1), 383–388. https://doi.org/10.1073/pnas.0605198104 .

Rowe, M. P., Gillespie, B. M., Harris, K. R., Koether, S. D., Shannon, L. J. Y., & Rose, L. A. (2015). Redesigning a general education science course to promote critical thinking. Cell Biology Education, 14 (3). https://doi.org/10.1187/cbe.15-02-0032 .

Sadler, T. D., & Zeidler, D. L. (2005). Patterns of informal reasoning in the context of socioscientific decision-making. Journal of Research in Science Teaching, 42 (1), 112–138. https://doi.org/10.1002/tea.20042 .

Sander, D., & Scherer, K. (2009). Traité de psychologie des émotions . Paris: Dunod.

Sandoval, W. A., & Bell, P. (2004). Design-based research methods for studying learning in context: introduction. Educational Psychologist, 39 (4), 199–201.

Scardamalia, M., & Bereiter, C. (2006). Knowledge building: theory, pedagogy, and technology. In K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 97–115). New York: Cambridge University Press.

Schleicher, A. (2019). PISA 2018 : Insights and Interpretations . OECD Publishing.

Schön, D. A. (1983). The reflective practitioner. How professionals think in action . New York: Basic Books.

Servan-Schreiber, D. (2007). La douleur de l'autre est en nous. Psychologies.com , 3 déc. 07.

Seyfarth, R. M., & Cheney, D. L. (2013). Affiliation, empathy, and the origins of theory of mind. Proceedings of the National Academy of Sciences of the United States of America, 110 (Suppl 2), 10349–10356. https://doi.org/10.1073/pnas.1301223110 .

Shamay-Tsoory, S. G., Aharon-Peretz, J., & Perry, D. (2009). Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. Brain: A Journal of Neurology, 132 (Pt 3), 617–627. https://doi.org/10.1093/brain/awn279 .

Simonneaux, L. (2003). L’argumentation dans les débats en classe sur une technoscience controversée. Aster, 37 , 189–214.

Simonneaux, L., & Simonneaux, J. (2005). Argumentation sur des questions socio-scientifiques. Didaskalia, 27 , 79–108.

Sinatra, G. M., Southerland, S. A., McConaughy, F., & Demastes, J. W. (2003). Intentions and beliefs in students’ understanding and acceptance of biological evolution. Journal of Research in Science Teaching, 40 (5), 510–528. https://doi.org/10.1002/tea.10087 .

Singer, T., & Klimecki, O. M. (2014). Empathy and compassion. Current Biology, 24 (18), R875–R878. https://doi.org/10.1016/j.cub.2014.06.054 .

Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science, 303 (5661), 1157–1162.

Starr, D. (2019). The confession. Science, 364 (6445), 1022–1026. https://doi.org/10.1126/science.364.6445.1022 .

Strike, K. A., & Posner, G. J. (1982). Conceptual change and science teaching. International Journal of Science Education, 4 (3), 231–240.

Tourbe, C. (2004). L'effet placebo diminue bien la douleur. Science et Vie , 1039, April 2004, p. 26.

Vollberg, M. C., & Cikara, M. (2018). The neuroscience of intergroup emotion. Current Opinion in Psychology, 24 , 48–52. https://doi.org/10.1016/j.copsyc.2018.05.003 .

Vosniadou, S. (1994). Capturing and modeling the process of conceptual change. Learning and Instruction, 4 (1), 45–69.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359 (6380), 1146–1151. https://doi.org/10.1126/science.aap9559 .

Wager, T. D., Rilling, J. K., Smith, E. E., Sokolik, A., Casey, K. L., Davidson, R. J., et al. (2004). Placebo-induced changes in fMRI in the anticipation and experience of pain. Science, 303 (5661), 1162–1167. https://doi.org/10.1126/science.1093065 .

Waight, N., & Abd-El-Khalick, F. (2011). From scientific practice to high school science classrooms: transfer of scientific technologies and realizations of authentic inquiry. Journal of Research in Science Teaching, 48 (1), 37–70.

Willingham, D. T. (2008). Critical thinking: why is it so hard to teach? Arts Education Policy Review, 109 ( 4 ), 21–32. https://doi.org/10.3200/AEPR.109.4.21-32 .

Yarden, A., Falk, H., Federico-Agraso, M., Jiménez-Aleixandre, M., Norris, S., & Phillips, L. (2009). Supporting teaching and learning using authentic scientific texts: a rejoinder to Danielle J. Ford. Research in Science Education, 39 (3), 391–395.

Young, L., & Koenigs, M. (2007). Investigating emotion in moral cognition: a review of evidence from functional neuroimaging and neuropsychology. British Medical Bulletin, 84 (1), 69–79. https://doi.org/10.1093/bmb/ldm031 .

Download references

Acknowledgments

We would like to thank Prof Mireille Bertancourt and the TECFA lab at Geneva University for its stimulating climate, Dr. Vincent Widmer for constructive comments and designing Fig. 2 , all the students involved in the course over many years for their constructive comments that helped the design evolve, Dr. Emilie Qiao for insightful comments and suggestions about neuroscience research, and Mattia Fritz for constructive comments.

Open access funding provided by University of Geneva.

Author information

Authors and Affiliations

TECFA, IUFE, University of Geneva, Geneva, Switzerland

François Lombard

TECFA, University of Geneva, Geneva, Switzerland

Daniel K. Schneider

IUFE, University of Geneva, Geneva, Switzerland

Marie Merminod & Laura Weiss


Corresponding author

Correspondence to François Lombard .

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The following codebook was used to code the progression of selected critical thinking skills (EE2 to EE4). Each OE item was coded on a 3-point scale (see the performance measures column).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Lombard, F., Schneider, D.K., Merminod, M. et al. Balancing Emotion and Reason to Develop Critical Thinking About Popularized Neurosciences. Sci & Educ 29 , 1139–1176 (2020). https://doi.org/10.1007/s11191-020-00154-2


Published: 07 September 2020

Issue Date: October 2020

DOI: https://doi.org/10.1007/s11191-020-00154-2


  • Socio-scientific issues
  • Emotion
  • Debate
  • Critical thinking
  • Neuroscience
  • Educational design
  • Science education

The development of the reasoning brain and how to foster logical reasoning skills



Executive summary

Learning to reason logically is necessary for the growth of critical and scientific thinking in children. Yet, both psychological and neural evidence indicates that logical reasoning is hard even for educated adults. Here, we examine the factors that scaffold the emergence of logical reasoning in children. Evidence suggests that the development of reasoning with concrete information can be accounted for by the development of both world knowledge and self-regulation. The transition from concrete to abstract reasoning, however, is a challenge for children. Children’s development of reasoning may be supported by encouraging both divergent thinking and reasoning at levels of abstraction that are just above reasoners’ current levels, alongside activities in which children reason with others.

Introduction

It is often argued that one of the most fundamental goals of education is to nurture critical thinking, that is, to teach children to employ good reasoning skills when developing their beliefs. Therefore, fostering logical reasoning should be an important goal for education: Children should learn to provide logical reasons for their opinions and should be able to distinguish between good and bad arguments. This is likely to be important for their effective exercise of citizenship as adults. For example, logical reasoning could tell you that it is unwarranted to conclude “All Muslims are terrorists” from the assertions “All the 9/11 perpetrators are Muslims” and “All the 9/11 perpetrators are terrorists.” Yet, many educated adults still draw such a conclusion, most likely because fear and bias can overcome rational thinking. This suggests that logical reasoning is hard even for educated adults, a conclusion that is supported by a wealth of psychological studies.

Perhaps the most striking demonstration of the difficulty of logical reasoning was devised by the psychologist Peter Wason in 1966 1 . Wason designed a task in which he presented participants with four playing cards, each with a letter on one side and a number on the other side. For example, the cards could be as follows:

A         B         2          3

Participants were then shown the conditional rule “If a card has the letter A on one side, then it has the number 2 on the other side.” The task consisted of selecting those cards that had to be turned over to discover whether the rule was true or false. Since Wason’s study, that task has been performed many times, and the results are always the same. Most people select either the A card alone or sometimes both the cards A and 2. However, very few adults, even highly educated ones, choose the 3 card. This is despite the fact that discovering what is on the other side of the 3 card is necessary to evaluate whether the rule is true or false (i.e., if there is an A on the other side of the 3, the rule is false). This reasoning failure has puzzled psychologists for decades because it questions the long-standing assumption that human beings are inherently rational.

Why is it so hard for participants to select the 3 card? Neuroscience research suggests that it is because it is much more difficult for the brain to focus on the elements that are absent from the rule (e.g., 3) than on the elements that are present (e.g., A) 2 . Thus, selecting the 3 card requires much more extensive brain activation in several brain regions (primarily involved in attention and concentration) to overcome that tendency (see Figure 1).

So, how can we get people to activate more of their reasoning brain and act more rationally on this task? One of the first ideas that comes to mind would be to teach them logic. Cheng and colleagues 3 have tested this. The researchers presented the Wason selection task to college students before and after they took a whole-semester introductory class in logic (about 40 hours of lectures). Surprisingly, they found no difference in the students’ poor performance between the beginning and the end of the semester. In other words, a whole semester of learning about logic did not help students make any fewer errors on the task!
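The card-selection logic can also be spelled out in a short sketch (a hypothetical model written for this text, not part of Wason’s study): a card is worth turning over only if what is hidden on its other side could falsify the rule “if A on one side, then 2 on the other.”

```python
# Toy model of the Wason selection task. Each card shows one face; the
# question is which visible faces warrant turning the card over.

def can_falsify(visible, rule_letter="A", rule_number="2"):
    """True if the hidden side of this card could make the rule false.

    A letter face matters only if it equals the rule's letter (the hidden
    number might not be 2). A number face matters only if it differs from
    the rule's number (the hidden letter might be A).
    """
    if visible.isalpha():
        return visible == rule_letter   # 'A' can falsify, 'B' cannot
    return visible != rule_number       # '3' can falsify, '2' cannot

cards = ["A", "B", "2", "3"]
print([c for c in cards if can_falsify(c)])  # → ['A', '3']
```

The model makes the classic error visible: the 2 card, which most people select, can never falsify the rule, while the rarely chosen 3 card can.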
What, then, can train the reasoning brain? To answer that question, it is interesting to turn to what we know about the development of logical reasoning in children.

Figure 1. The reasoning brain. Location of the brain regions (in red, blue, and white) that are activated when participants reason with elements that are not present in the rule in the Wason card task. Activations are displayed on pictures of the brain taken using a magnetic resonance imaging scanner. (Reproduced from Ref. 2 )

The development of concrete logical reasoning in children

It is clear that even young children can use some logical reasoning when concrete information is involved. For instance, most 6-year-olds can draw the conclusion “The person is hurt” from the statements “If the person breaks his arm, the person will be hurt” and “The person breaks his arm.” However, the reasoning abilities of young children are limited. For example, many 6-year-olds would also draw the conclusion “The person broke his arm” from the statements “If the person breaks his arm, the person will be hurt” and “The person is hurt.” This, however, is an invalid conclusion because there may be many other reasons why a person could be hurt. Children will progressively understand this and will make this type of reasoning error less and less as they get older. By the time they reach the end of elementary school, most children are able to refrain from concluding “The person broke his arm” from the statements “If the person breaks his arm, the person will be hurt” and “The person is hurt” 4 . Critically, this increased reasoning ability is mirrored by an increase in the ability to think about alternate causes for a given consequence. For example, older children are much more able than younger children to think about the many other reasons why someone would be hurt, like getting sick, breaking a leg, cutting a finger, etc. In other words, better reasoning ability with age is associated with a better ability to consider alternatives from stored knowledge. Clearly, however, children differ in terms of what they know about the world. This predicts that those who have better world knowledge and can think about more alternatives should be better reasoners than the others. And this is exactly what has been shown in several studies 4 .
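The contrast between the valid inference (modus ponens: “breaks arm, therefore hurt”) and the invalid one (affirming the consequent: “hurt, therefore broke arm”) can be checked mechanically by enumerating truth assignments. This brute-force validity checker is an illustrative sketch, not anything from the cited studies:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff no assignment of truth values makes
    all premises true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# p = "the person breaks his arm", q = "the person is hurt"
implies = lambda a, b: (not a) or b

# Modus ponens: "if p then q", "p", therefore "q" -- valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True

# Affirming the consequent: "if p then q", "q", therefore "p" -- invalid,
# because q can be true for reasons other than p (the counter-examples
# children gradually learn to generate).
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False
```

The counterexample found by the checker (p false, q true) is exactly the kind of alternative cause, being sick, cutting a finger, that older children learn to retrieve from world knowledge.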

Interestingly, the importance of world knowledge for reasoning has a paradoxical effect: It can make children poorer reasoners on some occasions. For example, children who can think about a lot of alternatives would be less inclined to draw the logically valid conclusion “The person will be tired” from the statements “If a person goes to sleep late, then he will be tired” and “The person goes to sleep late.” This is because a child with significant world knowledge can think of several circumstances that would make the conclusion unwarranted, such as waking up later the next day. Thus, more world knowledge needs to be associated with a greater ability to suppress the alternatives that might come to mind if the task requires it. This self-regulation ability relies on a part of the brain that also develops massively during childhood, i.e., the prefrontal cortex (see Figure 2). Overall, then, the development of concrete logical reasoning in children can be largely accounted for by the development of both world knowledge and the self-regulation skills associated with the prefrontal cortex.

Figure 2. The prefrontal cortex. Location of the prefrontal cortex on a 3D rendering of the human brain. Polygon data were generated by the Database Center for Life Science (DBCLS), distributed under a CC-BY-SA-2.1-jp license.

From concrete to abstract reasoning

There is, however, an important difference between the reasoning skills described above and the task developed by Peter Wason about the four cards. What we just described relates to reasoning with very concrete information, whereas the card task involves reasoning with purely abstract information. Abstract reasoning is difficult because it requires one to manipulate information without any referent in the real world. Knowledge is of no help. In fact, neuroscience research indicates that abstract and concrete reasoning rely on two different parts of the brain 5 (see Figure 3). The ability to reason logically with an abstract premise is generally only found during late adolescence 4 . Transitioning from concrete to abstract reasoning may require extensive practice with concrete reasoning. With mastery, children may extract from the reasoning process abstract strategies that can be applied to abstract information. A recent study, however, suggests a trick to help facilitate this transition in children 6 . The researchers discovered that abstract reasoning in 12- to 15-year-olds is much improved when these adolescents are first engaged in a task in which they have to reason with information that is concrete but empirically false, such as “If a shirt is rubbed with mud, then the shirt will be clean.” No such effect was observed when adolescents were asked to reason with concrete information that is empirically true, such as “If a shirt is washed with detergent, then the shirt will be clean.” Therefore, reasoning with information that contradicts what we know about the world might constitute an intermediary step in transitioning from concrete to abstract reasoning.

Figure 3. Brain regions activated when reasoning with concrete (left) and abstract (right) information. Activations are displayed on pictures of the brain taken using a magnetic resonance imaging scanner. (Reproduced from Ref. 5 )

What can we do to foster logical reasoning skills?

What, then, can we do to help foster the development of logical reasoning skills in children? The research described above suggests several potentially fruitful ways.

First, it is clear that the development of concrete reasoning (the very first type of reasoning children can engage in) relies on an increased ability to think about counter-examples for a given statement. This implies that knowledge about the world is critical to the emergence of logical reasoning in children, at least when concrete information is involved. Therefore, all activities that expand such world knowledge (e.g., reading informational books, learning new vocabulary, exploring new environments and places) are likely to benefit the development of children’s reasoning skills.

Second, it is important to consider that the more world knowledge a child possesses, the more he/she will need to juggle with that knowledge. For example, generating counter-examples when solving a reasoning problem requires maintaining pieces of information in memory for a short period of time, a type of memory called working memory. World knowledge can also sometimes be detrimental to reasoning and needs to be inhibited, such as when recognizing that the conclusion “The person will be tired” logically follows from the statements “If a person goes to sleep late, then he will be tired” and “The person goes to sleep late” (even if one might think of several conditions that would make the conclusion untrue based on what we know about the world). Fostering these self-regulation skills (working memory and inhibition) should thus benefit the development of logical reasoning. Several studies suggest that these functions can be promoted by targeting children’s emotional and social development, such as in curricula involving social pretend play (requiring children to act out of character and adjust to the improvisation of others), self-discipline, orderliness, and meditation exercises 7 . Studies also indicate positive effects of various physical activities emphasizing self-control and mindfulness, such as yoga or traditional martial arts 7 .

Third, studies indicate that the transition from concrete to abstract reasoning occurring around adolescence is challenging. Although more research is needed in this domain, one promising way to help this transition is by encouraging children’s thinking about alternatives with content that contradicts what they know about the world (e.g., “If a shirt is rubbed with mud, then the shirt will be clean”). In sum, as stated by Henry Markovits, “the best way to encourage the development of more abstract ways of logical reasoning is to gradually encourage both divergent thinking and reasoning at levels of abstraction that are just above reasoners’ current levels” 4 .

Fostering the development of logical reasoning should be an important goal of education. Yet, studies indicate that logical reasoning is hard even for educated adults and relies on the activation of an extensive network of brain regions. Neuroscience studies also demonstrate that reasoning with concrete information involves brain regions that qualitatively differ from those involved in reasoning with more abstract information, explaining why transitioning from concrete to abstract reasoning is challenging for children. We have nonetheless reviewed here recent research on the development of reasoning skills and suggested several important factors that scaffold children’s reasoning abilities, such as world knowledge and self-regulation functions.

On a final note, it is important to consider that logical reasoning is not something we always do on our own, isolated from our peers. In fact, some have argued that the very function of reasoning is to argue with our peers (i.e., to find the best arguments to convince others and to evaluate arguments made by others) 8 . This idea is interesting from an educational point of view because it suggests that reasoning with others might be easier than reasoning in isolation, a hypothesis validated by several studies. For example, performance on the card task developed by Peter Wason is much higher when participants solve it as a group rather than alone 8 . Therefore, encouraging activities in which children reason with others might also be a fruitful avenue for stimulating the reasoning brain.

  • Wason, P. C. Reasoning. In New Horizons in Psychology (ed. Foss, B. M.). (Penguin: Harmondsworth, 1966).
  • Prado, J., & Noveck, I. A. Overcoming perceptual features in logical reasoning: A parametric functional magnetic resonance imaging study. J Cogn Neurosci . 19(4): 642-57 (2007).
  • Cheng, P. W. et al. Pragmatic versus syntactic approaches to training deductive reasoning. Cogn Psychol . 18(3): 293-328 (1986).
  • Markovits, H. How to develop a logical reasoner. In The Developmental Psychology of Reasoning and Decision-Making (ed. Markovits, H.) 148-164. (Psychology Press: Hove, UK, 2014).
  • Goel, V. Anatomy of deductive reasoning. Trends Cogn. Sci. (Reg. Ed.) 11(10): 435-41 (2007).
  • Markovits, H., & Lortie-Forgues, H. Conditional reasoning with false premises facilitates the transition between familiar and abstract reasoning. Child Development 82(2): 646-660 (2011).
  • Diamond, A., & Lee, K. Interventions shown to aid executive function development in children 4 to 12 years old. Science 333(6045): 959-964 (2011).
  • Mercier, H., & Sperber, D. Why do humans reason? Arguments for an argumentative theory. Behav Brain Sci . 34(2): 57-74; discussion 74-111 (2011).


Neuroscience and How Students Learn

This article is based on a talk by Daniela Kaufer , associate professor in the Department of Integrative Biology, for the GSI Teaching & Research Center’s How Students Learn series in Spring 2011.


Also available: Video and full summary of Daniela Kaufer’s talk “What Can Neuroscience Research Teach Us about Teaching?”

Key Learning Principles

  • From the point of view of neurobiology, learning involves changing the brain.
  • Moderate stress is beneficial for learning, while mild and extreme stress are detrimental to learning.
  • Adequate sleep, nutrition, and exercise encourage robust learning.
  • Active learning takes advantage of processes that stimulate multiple neural connections in the brain and promote memory.

Research Fundamentals

Changing the brain: For optimal learning to occur, the brain needs conditions under which it is able to change in response to stimuli (neuroplasticity) and able to produce new neurons (neurogenesis).

The most effective learning involves recruiting multiple regions of the brain for the learning task. These regions are associated with such functions as memory, the various senses, volitional control, and higher levels of cognitive functioning.

[Figure: Kaufer’s inverted-U curve relating stress level to learning performance]

Moderate stress can be introduced in many ways: by playing unfamiliar music before class, for example, or changing up the format of discussion, or introducing any learning activity that requires individual participation or movement. However, people do not all react the same way to an event. The production of cortisol in response to an event varies significantly between individuals; what constitutes “moderate stress” for one person might constitute mild or extreme stress for another. So, for example, cold-calling on individual students in a large-group setting might introduce just the right amount of stress to increase some students’ performance, but it might produce excessive stress and anxiety for other students, so their performance is below the level you know they are capable of. Any group dynamic that tends to stereotype or exclude some students also adds stress for them.
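The inverted-U relation between stress and performance can be sketched as a toy function. The quadratic shape, the 0–1 arousal scale, and the per-person `peak` parameter below are illustrative assumptions for this sketch, not values from the talk or from any fitted data.

```python
# Toy inverted-U sketch: performance peaks at a moderate arousal level
# and falls off toward both no stress and extreme stress.

def performance(arousal, peak=0.5):
    """Performance on a 0-1 arousal scale; highest at `peak`, which can be
    shifted per individual to model differing stress responses."""
    return 1.0 - 4.0 * (arousal - peak) ** 2

for a in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"arousal {a:.2f} -> performance {performance(a):.2f}")
```

Shifting `peak` per person mirrors the point above: the same cold-call that lands near one student’s peak (“moderate stress”) may sit far down the curve for another.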

Adequate sleep, good nutrition, and regular exercise: These common-sense healthy habits promote optimal learning performance in two ways. First, they promote neuroplasticity and neurogenesis. Second, they keep cortisol and dopamine (stress and happiness hormones, respectively) at appropriate levels. All-night cramming sessions, skipped meals, and skipped exercise can actually reduce the brain’s capacity for high academic performance. (This is true for instructors as well as students.)

[Figure: diagram roughly mapping the verbs of Bloom’s Taxonomy in the Cognitive Domain onto regions of the human brain]

More complex thought processes are more beneficial for learning because they involve a greater number of neural connections and more neurological cross-talk. Active learning takes advantage of this cross-talk, stimulating a variety of areas of the brain and promoting memory.

Applications to Teaching

Classroom Activities , from the Teaching Guide for GSIs

Some Basic Active Learning Strategies , from the University of Minnesota Center for Educational Innovation

Further Reading


Blakemore, Sarah-Jayne and Uta Frith (2005). The Learning Brain: Lessons for Education . Malden, MA: Blackwell.

Felder, Richard M. and Rebecca Brent (1996). “ Navigating the Bumpy Road to Student-Centered Instruction. ” An abridged version of this article was published in College Teaching 44: 43–7.

Tokuhama-Espinosa, Tracey (2011). Mind, Brain, and Education Science: A Comprehensive Guide to the New Brain-Based Teaching . New York: W. W. Norton.

Walker, J. D. et al. (2008). “ A Delicate Balance: Integrating Active Learning into a Large Lecture Course. ” CBE Life Sciences Education 7.4: 361–67.

Winter, Dale et al. (2001). “ Novice Instructors and Student-Centered Instruction: Identifying and Addressing Obstacles to Learning in the College Science Laboratory. ” The Journal of Scholarship of Teaching and Learning 2.1: 14–42.


The Benefits and Drawbacks of Intuitive Thinking

Relying on our intuitions can help us be creative, but it might also contribute to conspiracy theories.

I have been researching the psychology of conspiracy beliefs for seven years now and people often ask me why people believe in them. This is not a simple question.

There are many reasons people might endorse conspiracy theories. Something that stands out to me, though, is how our thinking styles can influence the way we process information and therefore how prone we can be to conspiracy beliefs.

A preference for intuitive thinking over analytical thinking styles seems to be linked to endorsement of conspiracy theories.


Intuitive thinking is a thinking style reliant on immediate and unconscious judgments. It often follows gut feelings, whereas analytical thinking is about slower, more deliberate and detailed processing of information.

I’ve written before about how we can develop a more effortful, analytical thinking style to reduce our predisposition to conspiracy beliefs.

Research has shown critical thinking skills have many life benefits. For example, a study from 2017 found that people who scored higher in critical thinking skills reported fewer negative life events (for instance, getting a parking ticket or missing a flight). Critical thinking was a stronger predictor than intelligence for avoiding these types of events. It’s not clear why this is.

On the other hand, intuitive thinking has been linked to thinking errors. For example, intuitive thinking styles can lead to over-reliance on mental shortcuts, which can also increase susceptibility to conspiracy theories.

This can lead to dangerous consequences. For example, greater intuitive thinking has been linked to anti-vaccine conspiracy beliefs and vaccine hesitancy.

However, extremely successful people, such as Albert Einstein and Apple cofounder Steve Jobs, have argued for the importance of using their intuition and attributed their achievements to intuitive thinking.

The value of intuitive thinking

One benefit of intuitive thinking is that it takes little or no processing time, which allows us to make decisions and judgments quickly. And, in some circumstances, this is vital.

People working in crisis environments (such as the fire service) report the need to use intuitive thinking styles. During crises, it can be unrealistic to consistently use analytical thinking.

Experienced crisis managers often rely on intuitive thinking in the first instance as their default strategy but, as the task allows, draw on more analytical thinking later on. Critical and intuitive thinking styles can be used in tandem.

What is also important is that this type of intuition develops through years of experience, which can produce expert intuition.

Intuition can be crucial in other areas, too. Creativity is often seen as a benefit of intuitive thinking styles. A review conducted in 2016 of research into idea generation found that creativity is positively linked to intuitive thinking.

Although creativity is difficult to define, it can be thought of as similar to problem solving, where information is used to reach a goal in a new or unexpected way.

However, it is also important to note that the 2016 review found that combining intuitive and analytical thinking styles was best for idea evaluation.

What is the solution?

Now, research often focuses on developing ways to improve analytical thinking in order to reduce endorsement of dangerous conspiracy theories or reduce thinking errors and misconceptions .

We often treat analytic and intuitive thinking styles as an either-or: when making decisions or judgments, we must choose one over the other. However, a 2015 meta-analysis (where data from multiple studies are combined and analyzed) of 50 years of cognitive-style research found evidence that these thinking styles can operate at the same time.

Rather than two opposing ends of a spectrum, they are separate constructs, meaning that these thinking styles can happen together. Research in decision making also suggests that thinking style is flexible and the best decisions are made when the thinking style a person uses aligns with the situation at hand.

Some situations are more suited to analytical thinking styles (such as number tasks), while some are more suited to using intuition (such as understanding facial expressions). An adaptive decision maker is skilled in using both thinking styles.

So perhaps one way to reduce susceptibility to conspiracy theories is improving adaptive decision making. My 2021 study found that when people were confronted with a misconception they had previously held (overestimating the extent to which others endorse anti-vaccine conspiracy theories), they re-evaluated their decisions. This suggests that thinking styles can depend on the situation and information at hand.

Although in many situations analytical thinking is better, we shouldn’t dismiss the intuitive thinking style conspiracy theorists seem to favor as unworkable or inflexible. The answer could lie in understanding both thinking styles and being able to adjust our thinking styles when needed.

This article is republished from The Conversation under a Creative Commons license. Read the original article .

About the Author

Darel Cookson

Darel Cookson, Ph.D. , is a senior lecturer in psychology at Nottingham Trent University.


How to apply critical thinking in learning

Sometimes your university classes might feel like a maze of information. Consider critical thinking skills like a map that can lead the way.

Why do we need critical thinking?  

Critical thinking is a type of thinking that requires continuous questioning, exploring answers, and making judgments. Critical thinking can help you: 

  • analyze information to comprehend more thoroughly
  • approach problems systematically, identify root causes, and explore potential solutions 
  • make informed decisions by weighing various perspectives 
  • promote intellectual curiosity and self-reflection, leading to continuous learning, innovation, and personal development 

What is the process of critical thinking? 

1. Understand

Critical thinking starts with understanding the content that you are learning.

This step involves clarifying the logic and interrelations of the content by actively engaging with the materials (e.g., text, articles, and research papers). You can take notes, highlight key points, and make connections with prior knowledge to help you engage.

Ask yourself these questions to help you build your understanding:  

  • What is the structure?
  • What is the main idea of the content?  
  • What is the evidence that supports any arguments?
  • What is the conclusion?

2. Analyze  

You need to assess the credibility, validity, and relevance of the information presented in the content. Consider the authors’ biases and potential limitations in the evidence. 

Ask yourself questions in terms of why and how:

  • What is the supporting evidence?  
  • Why do they use it as evidence?   
  • How does the data presented support the conclusions?
  • What method was used? Was it appropriate?  

3. Evaluate

After analyzing the data and evidence you collected, make your evaluation of the evidence, results, and conclusions made in the content.

Consider the weaknesses and strengths of the ideas presented in the content to make informed decisions or suggest alternative solutions:

  • What is the gap between the evidence and the conclusion?  
  • What is my position on the subject?  
  • What other approaches can I use?  

When do you apply critical thinking and how can you improve these skills?   

1. Reading academic texts, articles, and research papers

  • analyze arguments
  • assess the credibility and validity of evidence
  • consider potential biases presented
  • question the assumptions, methodologies, and the way they generate conclusions

2. Writing essays and theses

  • demonstrate your understanding of the information, logic of evidence, and position on the topic
  • include evidence or examples to support your ideas
  • make your standing points clear by presenting information and providing reasons to support your arguments
  • address potential counterarguments or opposing viewpoints
  • explain why your perspective is more compelling than the opposing viewpoints

3. Attending lectures

  • understand the content by previewing, active listening, and taking notes
  • analyze your lecturer’s viewpoints by seeking whether sufficient data and resources are provided
  • think about whether the ideas presented by the lecturer align with your values and beliefs
  • talk about other perspectives with peers in discussions


University of Waterloo


  • About The Author – Steven Novella, MD


Mar 21 2024

  • Using CRISPR To Treat HIV


There is also the potential for CRISPR to be used as a direct therapy in medicine. In 2023 the first regulatory approval for CRISPR as a treatment for a disease was given to treatments for sickle cell disease and thalassemia. These diseases were targeted for a technical reason – you can take bone marrow out of a patient, use CRISPR to alter the genes for hemoglobin, and then put it back in. What’s really tricky about using CRISPR as a medical treatment is not necessarily the genetic change itself, but getting the CRISPR to the correct cells in the body. This requires a vector, and is the most challenging part of using CRISPR as a medical intervention. But if you can bring the cells to the CRISPR, that eliminates the problem.


Mar 18 2024

  • Energy Demand Increasing


First, I have to state my usual skeptical caveat – these are projections, and we have to be wary of projecting short term trends indefinitely into the future. The numbers look like a blip on the graph, and it seems weird to take that blip and extrapolate it out. But these forecasts are not just based on looking at such graphs and then extending the line of current trends. These are based on an industry analysis which includes projects that are already under way. So there is some meat behind these forecasts.

What are the factors that seem to be driving this current and projected increase in electricity demand? They are all the obvious ones you might think. First, something which I and other technology-watchers predicted is the increase in the use of electric vehicles. In the US there are more than 2.4 million registered electric vehicles. While this is only about 1% of the US fleet, EVs represent about 9% of new car sales, and growing. If we are successful in somewhat rapidly (it will still take 20-30 years) changing our fleet of cars from gasoline engines to electric or hybrid, that represents a lot of demand on the electricity grid. Some have argued that EV charging is mostly at night (off peak), so this will not necessarily require increased electricity production capacity, but that is only partly true. Many people will still need to charge up on the road, or will charge up at work during the day, for example. It’s hard to avoid the fact that EVs represent a potential massive increase in electricity demand. We need to factor this in when planning future electricity production.

Another factor is data centers. The world’s demand for computer cycles is increasing, and there are already plans for many new data centers, which are a lot faster to build than the plants to power them. Recent advances in AI only increase this demand. Again, we may mitigate this somewhat by prioritizing computer advances that make computers more energy efficient, but this will only be a partial offset. We also have to think about applications, and whether they are worth it. The one that gets the most attention is crypto – by one estimate, Bitcoin mining alone used 121 terawatt-hours of electricity in 2023, the same as the Netherlands (with a population of 17 million people).
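The scale of that Bitcoin figure can be put in per-person terms with quick arithmetic; a minimal sketch using only the two numbers cited above (121 TWh and a population of 17 million):

```python
# Rough per-person comparison for the Bitcoin energy estimate cited above.
bitcoin_twh = 121                  # estimated TWh used by Bitcoin mining in 2023
netherlands_population = 17_000_000

# 1 TWh = 1e9 kWh, so this is annual kWh per Dutch resident.
kwh_per_person = bitcoin_twh * 1e9 / netherlands_population
print(f"{kwh_per_person:,.0f} kWh per person")  # about 7,100 kWh
```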

Mar 15 2024

  • What Is a Grand Conspiracy?


For blog posts I also tend to rely on links to previous articles for background, and I have little patience for those who cannot bother to click these links to answer their questions or before making accusations about not having properly defined a term, for example. I don’t expect people to have memorized my entire catalogue, but click the links that are obviously there to provide further background and explanation. Along those lines, I suspect I will be linking to this very article in all my future articles about conspiracy theories.

What is a grand conspiracy theory? First a bit more background, about categorization itself. There are two concepts I find most useful when thinking about categories – operational definition and defining characteristics. An operational definition is one that essentially is a list of inclusion and exclusion criteria, a formula, that if you follow, will determine if something fits within the category or not. It’s not a vague description or general concept – it is a specific list of criteria that can be followed “operationally”. This comes up a lot in medicine when defining a disease. For example, the operational definition of “essential hypertension” is persistent (three readings or more) systolic blood pressure over 130 or diastolic blood pressure over 80.
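An operational definition like this can be written down as literal inclusion criteria. A minimal sketch in Python, where the function name and data format are illustrative (only the thresholds come from the example above), not a clinical tool:

```python
def meets_operational_definition(readings):
    """Essential hypertension, operationally: three or more readings with
    systolic pressure over 130 or diastolic pressure over 80 (mmHg).

    `readings` is a list of (systolic, diastolic) pairs.
    """
    elevated = [(s, d) for s, d in readings if s > 130 or d > 80]
    return len(elevated) >= 3

print(meets_operational_definition([(142, 88), (135, 79), (128, 84)]))  # True
print(meets_operational_definition([(120, 75), (118, 70), (140, 85)]))  # False
```

The point is that the criteria are mechanical: anyone following the same formula reaches the same answer, which is exactly what distinguishes an operational definition from a vague description.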

Mar 12 2024

  • Pentagon Report – No UFOs


“To date, AARO has not discovered any empirical evidence that any sighting of a UAP represented off-world technology or the existence of a classified program that had not been properly reported to Congress.”

They reviewed evidence from 1945 to 2023, including interviews, reports, classified and unclassified archives, spanning all “official USG investigatory efforts” regarding possible alien activity. They found nothing – nada, zip, goose egg, zero. They did not find a single credible report or any physical evidence. They followed up on all the fantastic claims by UFO believers (they now use the term UAP for unidentified anomalous phenomena), including individual sightings, claims of secret US government programs, claims of reverse engineering alien technology or possessing alien biological material.

They found that all eyewitness accounts were either misidentified mundane phenomena (military aircraft, drones, etc.) or simply lacked enough evidence to resolve. Eyewitness accounts of secret government programs were all misunderstood conversations or hearsay, often referring to known and legitimate military or intelligence programs. Their findings are familiar to any experienced skeptic – people misinterpret what they see and hear, fitting their misidentified perception into an existing narrative. This is what people do. This is why we need objective evidence to know what is real and what isn’t.

I know – this is a government report saying the government is not hiding evidence of aliens. This is likely to convince no hard-core believer. Anyone using conspiracy arguments to prop up their claims of aliens will simply incorporate this into their conspiracy narrative. Grand conspiracy theories are immune to evidence and logic, because the conspiracy can be used to explain away anything – any lack of evidence, or any disconfirming evidence. It is a magic box in which any narrative can be true without the burden of evidence or even internal consistency.

Mar 11 2024

  • Mach Effect Thrusters Fail


Speculative technology, however, may or may not even be possible within the laws of physics. Such technology is usually highly disruptive, seems magical in nature, but would be incredibly useful if it existed. Common technologies in this group include faster than light travel or communication, time travel, zero-point energy, cold fusion, anti-gravity, and propellantless thrust. I tend to think of these as science fiction technologies, not just speculative. The big question for these phenomena is how confident are we that they are impossible within the laws of physics. They would all be awesome if they existed (well, maybe not time travel – that one is tricky), but I am not holding my breath for any of them. If I had to bet, I would say none of these exist.

That last one, propellantless thrust, does not usually get as much attention as the other items on the list. The technology is rarely discussed explicitly in science fiction, but often it is portrayed and just taken for granted. Star Trek’s “impulse drive”, for example, seems to lack any propellant. Any ship that zips into orbit like the Millennium Falcon likely is also using some combination of anti-gravity and propellantless thrust. It certainly doesn’t have large fuel tanks or display any exhaust similar to a modern rocket.

In recent years NASA has tested two speculative technologies that claim to be able to produce thrust without propellant – the EM drive and the Mach Effect thruster (MET). For some reason the EM drive received more media attention (including from me), but the MET was actually the more interesting claim. All existing forms of internal thrust involve throwing something out the back end of the ship. The conservation of momentum means that there will be an equal and opposite reaction, and the ship will be thrust in the opposite direction. This is your basic rocket. We can get more efficient by accelerating the propellant to higher and higher velocity, so that you get maximal thrust from each atom of propellant your ship carries, but there is no escape from the basic physics. Ion drives are perhaps the most efficient thrusters we have, because they accelerate charged particles to very high velocities, but they produce very little thrust. So they are good for moving ships around in space but cannot get a ship off the surface of the Earth.
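The trade-off described above – more total velocity change from the same propellant as exhaust velocity rises – is captured by the Tsiolkovsky rocket equation, dv = v_e · ln(m0/mf). A quick sketch with illustrative numbers (not figures for any specific engine):

```python
import math

def delta_v(exhaust_velocity_ms, mass_initial_kg, mass_final_kg):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return exhaust_velocity_ms * math.log(mass_initial_kg / mass_final_kg)

# Same ship, same 6,000 kg of propellant, different exhaust velocities:
chemical = delta_v(4_500, 10_000, 4_000)    # chemical-rocket-like exhaust
ion_like = delta_v(30_000, 10_000, 4_000)   # ion-drive-like exhaust
print(round(chemical), round(ion_like))     # higher v_e -> proportionally more dv
```

The mass ratio term is why there is "no escape from the basic physics": with no propellant expelled (m0 = mf), the logarithm is zero and so is the thrust-derived delta-v.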

Mar 07 2024

  • Is the AI Singularity Coming?


Such rapid advances legitimately make one wonder where we will be in 5, 10, or 20 years. Computer scientist Ben Goertzel, who popularized the term AGI (artificial general intelligence), recently stated during a presentation that he believes we will achieve not only AGI but an AGI singularity involving a superintelligent AGI within 3-8 years. He thinks it is likely to happen by 2030, but could happen as early as 2027.

My reaction to such claims, as a non-expert who follows this field closely, is that this seems way too optimistic. But Goertzel is an expert, so perhaps he has some insight into research and development that’s happening in the background that I am not aware of. So I was very interested to see his line of reasoning. Will he hint at research that is on the cusp of something new?

Goertzel laid out three lines of reasoning to support his claim. The first is simply extrapolating from the recent exponential growth of narrow AI. He admits that LLM systems and other narrow AI are not themselves on a path to AGI, but they show the rapid advance of the technology. He aligns himself here with Ray Kurzweil, who apparently has a new book coming out, The Singularity is Nearer. Kurzweil has a reputation for predicting advances in computer technology that were overly optimistic, so that is not surprising.

Mar 04 2024

  • Climate Sensitivity and Confirmation Bias


First let me review the relevant background. ECS is a measure of how much climate warming will occur as CO2 concentration in the atmosphere increases, specifically the temperature rise in degrees Celsius with a doubling of CO2 (from pre-industrial levels). This number is of keen significance to the climate change problem, as it essentially tells us how much and how fast the climate will warm as we continue to pump CO2 into the atmosphere. There are other variables as well, such as other greenhouse gases and multiple feedback mechanisms, making climate models very complex, but the ECS is certainly a very important variable in these models.

There are multiple lines of evidence for deriving ECS, such as modeling the climate with all variables and seeing what the ECS would have to be in order for the model to match reality – the actual warming we have been experiencing. Therefore our estimate of ECS depends heavily on how good our climate models are. Climate scientists use a statistical method to determine the likely range of climate sensitivity. They take all the studies estimating ECS, creating a range of results, and then determine the 90% confidence range – it is 90% likely, given all the results, that ECS is between 2–5°C.
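The pooling step described above can be sketched in a few lines: collect ECS estimates from many studies and take the central 90% of the empirical distribution. The values below are invented for illustration, not real study results:

```python
def central_interval(estimates, coverage=0.90):
    """Central `coverage` interval of pooled estimates (empirical quantiles)."""
    xs = sorted(estimates)
    tail = (1 - coverage) / 2
    lo = xs[int(tail * (len(xs) - 1))]
    hi = xs[int((1 - tail) * (len(xs) - 1))]
    return lo, hi

# Hypothetical ECS estimates in degrees C per CO2 doubling:
ecs_estimates = [2.1, 2.5, 2.8, 3.0, 3.1, 3.3, 3.5, 3.9, 4.2, 4.8, 5.1]
print(central_interval(ecs_estimates))
```

Real syntheses weight studies by quality and uncertainty rather than pooling raw point estimates, but the basic idea of a central interval over many results is the same.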

Mar 01 2024

Virtual Walking


But researchers are working on ways to make virtual walking a more compelling, realistic, and less nausea-inducing experience. A team from the Toyohashi University of Technology and the University of Tokyo studied virtual walking and introduced two new variables – they added a shadow to the avatar, and they added vibration sensation to the feet. An avatar is a virtual representation of the user in the virtual space. Most applications allow some level of user control over how the avatar is viewed, but typically either first person (you are looking through the avatar’s eyes) or third person (typically your perspective is floating above and behind the avatar). In this study they used only first person perspective, which makes sense since they were trying to see how realistic an experience they can create.

The shadow was always placed in front of the avatar and moved with the avatar. This may seem like a little thing, but it provides visual feedback connecting the desired movements of the user with the movements of the avatar. As weird as this sounds, this is often all that it takes to not only feel as if the user controls the avatar but is embodied within the avatar. (More on this below.) Also they added four pads to the bottom of the feet, two on each foot, on the toe-pad and the heel. These vibrated in coordination with the virtual avatar’s foot strikes. How did these two types of sensory feedback affect user perception?

Feb 27 2024

Frozen Embryos Are Not People


The relevant politics have been hashed out by many others. What I want to weigh in on is the relevant logic. Two years ago I wrote about the question of when a fetus becomes a person . I laid out the core question here – when does a clump of cells become a person? Standard rhetoric in the anti-abortion community is to frame the question differently, claiming that from the point of fertilization we have human life. But from a legal, moral, and ethical perspective, that is not the relevant question. My colon is human life, but it’s not a person. Similarly, a frozen clump of cells is not a child.

This point inevitably leads to the rejoinder that those cells have the potential to become a person. But the potential to become a thing is not the same as being a thing. If allowed to develop, those cells have the potential to become a person – but they are not a person. This would be analogous to pointing to a stand of trees and claiming it is a house. Well, the wood in those trees has the potential to become a house. It has to go through a process, and at some point you have a house.

That analogy, however, breaks down when you consider that the trees will not become a house on their own. An implanted embryo will become a child (if all goes well) unless you do something to stop it. True but irrelevant to the point. The embryo is still not a person. The fact that the process to become a person is an internal rather than external one does not matter. Also, the Alabama Supreme Court is extending the usual argument beyond this point – those frozen embryos will not become children on their own either. They would need to go through a deliberate, external, artificial process in order to have the full potential to develop into a person. In fact, they would not exist without such a process.

Feb 23 2024

Odysseus Lands on the Moon


Only five countries have ever achieved a soft landing on the moon: America, China, Russia, Japan, and India. Only America did so with a crewed mission; the rest were robotic. Even though this feat was first accomplished in 1966 by the Soviet Union, it is still an extremely difficult thing to pull off. Getting to the Moon requires a powerful rocket. Inserting into lunar orbit requires a great deal of control, on a craft that is too far away for real-time remote control. This means you either need pilots on the craft, or the craft must be able to carry out a pre-programmed sequence to accomplish this goal. Then landing on the lunar surface is tricky. There is no atmosphere to slow the craft down, but also no atmosphere to get in the way. As the ship descends it burns fuel, which constantly changes the weight of the vehicle. It has to remain upright with respect to the lunar surface and reduce its speed by just the right amount to touch down softly – either with a human pilot or all by itself.

The Odysseus mission is funded by NASA as part of their program to develop private industry to send instruments and supplies to the Moon. It is the goal of their Artemis mission to establish a permanent base on the moon, which will need to be supported by regular supply runs. In January another company with a NASA grant under the same program, Astrobotic Technology, sent their own craft to the Moon, the Peregrine . However, a fuel leak prevented the craft from orienting its solar panels toward the sun, and the mission had to be abandoned. This left the door open for the Odysseus mission to grab the achievement of being the first private company to do so.




Transforming the understanding and treatment of mental illnesses.


Decoding the Mind: Basic Science Revolutionizes Treatment of Mental Illnesses

By Linda Brady, Margaret Grabb, Susan Koester, Yael Mandelblat-Cerf, David Panchision, Jonathan Pevsner, Ashlee Van’t-Veer, and Aleksandra Vicentic on behalf of the NIMH Division of Neuroscience and Basic Behavioral Science

March 21, 2024 • 75th Anniversary


For 75 years, NIMH has transformed the understanding and treatment of mental illnesses through basic and clinical research—bringing hope to millions of people. This Director’s Message, guest written by NIMH’s Division of Neuroscience and Basic Behavioral Science , is part of an anniversary series celebrating this momentous milestone.

The Division of Neuroscience and Basic Behavioral Science (DNBBS) at the National Institute of Mental Health (NIMH) supports research on basic neuroscience, genetics, and basic behavioral science. These are foundational pillars in the quest to decode the human mind and unravel the complexities of mental illnesses.

At NIMH, we are committed to supporting and conducting genomics research as a priority research area. As the institute celebrates its 75th Anniversary, we are spotlighting DNBBS-supported efforts connecting genes to cells to circuits to behavior that have led to a wealth of discoveries and knowledge that can improve the diagnosis, treatment, and prevention of mental illnesses.

Making gene discoveries

Illustration of a human head showing a brain and DNA.

Medical conditions often run in families. For instance, if someone in your immediate family has high blood pressure, you are more likely to have it too. It is the same with mental disorders—often they run in families. NIMH is supporting research into human genetics to better understand why this occurs. This research has already led to the discovery of hundreds of gene variants that make us more or less likely to develop a mental disorder.

There are two types of genetic variation: common and rare. Common variation refers to DNA changes often seen in the general population, whereas rare variation is DNA changes found in only a small proportion of the population. Individually, most common gene variants have only a minor impact on the risk for a mental disorder. Instead, most disorders result from many common gene variants that, together, contribute to the risk for and severity of that disorder.
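The additive picture described above—many common variants, each with a small individual effect, combining into an overall risk—is essentially how a polygenic score is computed. The sketch below is a hypothetical illustration with invented allele counts and effect weights, not data or parameters from any real study.

```python
# Hypothetical sketch of an additive polygenic score: each common variant
# contributes (allele count) x (small per-variant effect weight), and the
# contributions are summed. All numbers below are invented for illustration.

def polygenic_score(allele_counts, effect_weights):
    """Additive score: sum of allele count (0, 1, or 2) times per-variant weight."""
    if len(allele_counts) != len(effect_weights):
        raise ValueError("one weight per variant is required")
    return sum(count * weight for count, weight in zip(allele_counts, effect_weights))

# One person's allele counts at five illustrative variants, with small weights
# (real studies estimate effect sizes from genome-wide association data).
counts = [0, 1, 2, 1, 0]
weights = [0.02, 0.01, 0.03, 0.015, 0.02]

print(round(polygenic_score(counts, weights), 3))  # 0.085
```

In real studies such scores are standardized against a reference population before interpretation; the point here is only how many small effects combine additively into one risk estimate.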

NIMH is committed to uncovering the role of genes in mental disorders with the aim of improving the lives of people who experience them. One of the many ways NIMH contributes to the discovery of common gene variants is by supporting the Psychiatric Genomics Consortium (PGC). The consortium of almost 1,000 scientists across the globe, including researchers in the NIMH Intramural Research Program and others conducting NIMH-supported research, is one of the largest and most innovative biological investigations in psychiatry.

Global collaborations such as the PGC are critical to amassing the immense sample sizes needed to identify common gene variants. Data from the consortium’s almost one million participants have already led to transformative insights about genetic contributors to mental illnesses and the genetic relationships of these illnesses to each other. To date, studies conducted as part of the consortium have uncovered common variation in over a dozen mental illnesses.

In contrast to common gene variants, rare gene variants are very uncommon in the general population. When they do occur, they often have a major impact on the occurrence of an illness, particularly when they disrupt gene function or regulation. Rare variants involving mutations in a single gene have been linked to several mental disorders, often through NIMH-supported research. For instance, a recent NIMH-funded study found that rare variation in 10 genes substantially increased the risk for schizophrenia. However, it is important to note that genetics is not destiny; even rare variants only raise the risk for mental disorders, but many other factors, including your environment and experiences, play important roles as well.

Because of the strong interest among researchers and the public in understanding how genes translate to changes in the brain and behavior, NIMH has developed a list of human genes associated with mental illnesses. These genes were identified through rare variation studies and are meant to serve as a resource for the research community. The list currently focuses on rare variants, but NIMH plans to continue expanding it as evidence accumulates for additional gene variants (rare or common).

Moreover, mental illnesses are a significant public health burden worldwide. For this reason, NIMH investments in genomics research extend across the globe. NIMH has established the Ancestral Populations Network (APN) to make genomics studies more diverse and shed light on how genetic variation contributes to mental disorders across populations. APN currently includes seven projects with more than 100 researchers across 25 sites worldwide.

World map showing the location of projects in the Ancestral Populations Network: USA, Mexico, Ecuador, Peru, Chile, Colombia, Brazil, Argentina, Nigeria, South Africa, Uganda, Ethiopia, Kenya, Pakistan, India, Singapore, Taiwan, and South Korea.

Connecting biology to behavior

While hundreds of individual genes have been linked to mental illnesses, the function of most of these genes in the brain remains poorly understood. But high-tech advances and the increased availability of computational tools are enabling researchers to begin unraveling the intricate roles played by genes.

In addition to identifying genetic variation that raises the risk for mental illnesses, NIMH supports research that will help us understand how genes contribute to human behavior. This information is critical to discovering approaches to diagnose, treat, and ultimately prevent or cure mental illnesses.

An NIMH-funded project called the PsychENCODE consortium   focuses on understanding how genes impact brain function. PsychENCODE is furthering knowledge of how gene risk maps onto brain function and dysfunction by cataloging genomic elements in the human brain and studying the actions of different cell types. The PsychENCODE dataset currently includes multidimensional genetic data from the postmortem brains of thousands of people with and without mental disorders.

Findings from the first phase of PsychENCODE were published as a series of 11 papers   examining functional genomics in the developing and adult brains and in mental disorders. A second batch of PsychENCODE papers will be published later this year. These findings help clarify the complex relationships between gene variants and the biological processes they influence.

PsychENCODE and other NIMH-supported projects are committed to sharing biospecimens quickly and openly to help speed research and discovery.

Logo for the NIMH Repository and Genomics Resource showing a brain and a test tube.

Facilitating these efforts is the NIMH Repository and Genomics Resource (NRGR), where samples are stored and shared. NRGR includes hundreds of thousands of samples, such as DNA, RNA, and cell lines, from people with and without mental disorders, along with demographic and diagnostic information.

Logo for the Scalable and Systematic Neurobiology of Psychiatric and Neurodevelopmental Disorder Risk Genes (SSPsyGene) showing a brain made of puzzle pieces.

Another NIMH initiative to connect risk genes to brain function is Scalable and Systematic Neurobiology of Psychiatric and Neurodevelopmental Disorder Risk Genes (SSPsyGene) . This initiative uses cutting-edge techniques to characterize the biological functions of 250 mental health risk genes—within the cells where they are expressed—to better understand how those genes contribute to mental illnesses. By systematically characterizing the biological functions of risk genes in cells, SSPsyGene will empower researchers to learn about biological pathways that may serve as new targets for treatment.

Genes also affect behavior by providing the blueprint for neurons, the basic units of the nervous system. Neurons communicate with each other via circuits in the brain, which enables us to process, integrate, and convey information. NIMH supports many initiatives to study the foundational role of neural networks and brain circuits in shaping diverse mental health-related behaviors like mood, learning, memory, and motivation.

For instance, studies supported through a basic-to-translational science initiative at NIMH focus on modifying neural activity to improve cognitive, emotional, and social processing. Similarly, another new funding opportunity encourages studies in humans and animals examining how emotional and social cues are represented across brain circuits to help address a core deficit in many mental disorders. These studies will increase understanding of the biological mechanisms that support behavior throughout life and offer interventions to improve these functions in healthy and clinical populations.

Developing treatments and therapeutics

The gene discovery and biology-to-behavior programs described here will lay the foundation for delivering novel therapeutics. To be prepared to rapidly implement findings from this research, NIMH supports several initiatives to identify behavioral and biological markers for use in clinical studies and increase our ability to translate research into practice.

Through its therapeutics discovery research programs, NIMH advances early-stage discovery and development studies in humans and early efficacy trials for mental disorders. Taking these efforts a step further, NIMH supports the National Cooperative Drug Discovery/Development Groups for the Treatment of Mental Disorders, which encourage public–private partnerships to accelerate the discovery and development of novel therapeutics and new biomarkers for use in human trials. Moreover, NIMH is one of several institutes and centers in the NIH Blueprint Neurotherapeutics Network, launched to enable neuroscientists in academia and biotechnology companies to develop new drugs for nervous system disorders.

Graphic showing the development pathway advancing from exploratory and hit-to-lead studies, to lead optimization, to scale-up and manufacturing, to IND-enabling work, to a Phase 1 clinical trial, with exit outcomes of external funding and partnerships, other grants, and attrition.

For the treatments of tomorrow, NIMH is building a new research program called Pre-Clinical Research on Gene Therapies for Rare Genetic Neurodevelopmental Disorders, which encourages early-stage research to optimize gene therapies to treat disorders with prominent cognitive, social, or affective impairment. In parallel, NIMH’s Planning Grants for Natural History Studies of Rare Genetic Neurodevelopmental Disorders encourage the analysis of pre-existing data from people with rare disorders to learn about disease progression and enable future clinical trials with these populations.

NIMH’s Division of Neuroscience and Basic Behavioral Science supports many different research projects that help us learn about genes and gene functions, how the brain develops and works, and how these processes shape behavior. By investing in basic neuroscience, genetics, and behavioral research, we’re trying to find new targets for treatment and develop better therapies for mental disorders. We’re hopeful these efforts will lead to new ways to treat and prevent mental illnesses in the near future and, ultimately, improve the lives of people in this country and across the globe.

Hypothesis and Theory Article

What’s So Critical About Critical Neuroscience? Rethinking Experiment, Enacting Critique

  • 1 Department of Social Science Health and Medicine, King’s College London, London, UK
  • 2 Interacting Minds Centre, Aarhus University, Aarhus C, Aarhus, Denmark
  • 3 Department of Culture and Society - Section for Anthropology and Ethnography, Aarhus University, Højbjerg, Denmark

In the midst of on-going hype about the power and potency of the new brain sciences, scholars within “Critical Neuroscience” have called for a more nuanced and sceptical neuroscientific knowledge-practice. Drawing especially on the Frankfurt School, they urge neuroscientists towards a more critical approach—one that re-inscribes the objects and practices of neuroscientific knowledge within webs of social, cultural, historical and political-economic contingency. This paper is an attempt to open up the black-box of “critique” within Critical Neuroscience itself. Specifically, we argue that limiting enactments of critique to the invocation of context misses the force of what a highly-stylized and tightly-bound neuroscientific experiment can actually do. We show that, within the neuroscientific experiment itself, the world-excluding and context-denying “rules of the game” may also enact critique, in novel and surprising forms, while remaining formally independent of the workings of society, and culture, and history. To demonstrate this possibility, we analyze the Optimally Interacting Minds (OIM) paradigm, a neuroscientific experiment that used classical psychophysical methods to show that, in some situations, people worked better as a collective, and not as individuals—a claim that works precisely against reactionary tendencies that prioritize individual over collective agency, but that was generated and legitimized entirely within the formal, context-denying conventions of neuroscientific experimentation. At the heart of this paper is a claim that it was precisely the rigors and rules of the experimental game that allowed these scientists to enact some surprisingly critical, and even radical, gestures. We conclude by suggesting that, in the midst of large-scale neuroscientific initiatives, it may be “experiment”, and not “context”, that forms the meeting-ground between neuro-biological and socio-political research practices.

Introduction

What is there still to say about the growth and prominence of the new brain sciences? Almost 10 years ago, Steven Rose (2005) pointed out that “the global scale of research effort now put into the neurosciences primarily in the US, but closely followed by Europe and Japan, has turned them from classical ‘little sciences’, into a major industry engaging large teams of researchers, involving billions of dollars from government…and the pharmaceutical industry” ( Rose, 2005 : 3). With the recent advent of the Human Brain Project ( Honigsbaum, 2013 ), and the BRAIN initiative ( Markoff and Gorman, 2013 ), this narrative of growth and expansion has scarcely changed. Certainly, if their influence (and desire for influence) is sometimes over-stated, neuroscientific spaces are now among the most potent and creative sites for understanding human beings, their subjectivities, and their societies ( Andreasen, 2001 ; Pickersgill et al., 2011 ; Rose and Abi-Rached, 2013 ).

Unsurprisingly, as the neurosciences have grown in size, prominence and prestige, so has critical sociological, philosophical, and historical analysis grown up around their foothills ( Martin, 2000 ; Dumit, 2004 ; Ortega and Vidal, 2007 ). Such works range from critiques of a frightening “neuro-reductionism” ( Martin, 2004 ), to interest in the philosophical implications of neuroscience ( Malabou, 2008 ), to a more delicate labor of carving out shared spaces of interest between the neuro-and social sciences ( Roepstorff and Frith, 2012 ; Rose and Abi-Rached, 2013 ; see also Fitzgerald and Callard, Forthcoming , for an analysis of these modes of engagement). In recent years, however, “Critical Neuroscience” has emerged as perhaps the most prominent, and certainly the most self-consciously “critical”, framework for thinking about the relationship between neuroscience, society, politics and economics ( Choudhury et al., 2009 ; Slaby, 2010 ; Slaby and Choudhury, 2012 ). Put crudely, scholars within this tradition, sometimes rooted in the Frankfurt School, and usually tilting at the hidden political and economic entanglements of neuroscientific assumptions, try to pull the experimental rhetoric and practice of neuroscience away from an organizing fantasy of distanced facthood, and towards a more concretely political and reflexive socio-critique –re-inscribing the objects of neuroscientific practice back within the webs of social, cultural and historical context to which they are always inevitably subject ( Choudhury et al., 2009 ).

What follows here is also an essay on neuroscience and critique. The paper is not a tediously scholastic disagreement with Critical Neuroscience: we have found too much of value in its corpus, have learned too much from the scholars within it, and have shared in too many events that have explored and expanded its core rubrics. Moreover, as scholars who labor in, on, and through, the contemporary neurosciences, we remain sensitive to the acuity of, and creative potential in, a critical insistence on the looping relationships between facts, politics, ideologies, and publics. But we also think that there are important lacunae in the growth of Critical Neuroscience. In this paper, we re-visit the relationship between neuroscience and critique, to propose that a more “critical” neuroscience is not only one that is attentive to its own context; sometimes, we suggest, a critical neuroscientific statement is produced through precisely the opposite strategy—by focusing on, and working with , the internal, world-excluding dynamics of the neuroscientific experiment. Our core argument is that limiting enactments of critique to social or historical context misses the force of what a highly-stylized and tightly-bounded neuroscientific experiment can actually do . We will show how the neuroscientific experiment may also enact critique, in novel and surprising ways, formally independent of an attention to the workings of society, or culture, or history.

We center this argument on one case study: a series of experiments called “Optimally Interacting Minds” (OIM), published in Science by Bahador Bahrami et al. in 2010. Two things interest us about this study: (1) it is made up of a tightly-bound series of experimental demonstrations, cleaving closely to the conventional rules that make up the experimental game of psychophysics; and (2) it enacts and legitimizes a number of potentially “critical” interventions about the virtues of social and collective life, about the suboptimal performance and reasoning of the private individual, and about the nuanced—and deeply political—relationship of evidence and knowledge to forms of shared decision-making. We use this experiment to argue that “experiment”, as much as “context”, is a good basis for bringing a more “critical” neuroscience into being. But we will also suggest that such a focus may require the dilution of a well-worn (and perhaps, now, rather comfortable) insistence that the neuroscientist needs to focus on the indelible presence of “society” and “history” and “politics” and “economics” within her procedures. It requires commentators to think also about the rules of the experimental game itself, about the procedures that grant it its potency, and about the kinds of statements that those procedures make possible.

The paper is in four parts: first, drawing especially on three programmatic texts ( Choudhury et al., 2009 ; Slaby, 2010 ; Slaby and Choudhury, 2012 ) we offer a short exegesis of Critical Neuroscience, isolating what, precisely, is intended by the word “critical” within it; we then offer a brief account of the boundaries of experimentalism in social psychology and neuroscience; third, we introduce the OIM experiment, locating its core critical intervention in the broader sweep of neuroscientific experimentation; in the final section, we argue for a new way of relating to the neuroscientific experiment, whose rules and traditions, arcane as they seem, are not so isolated from critical and political statements as they sometimes appear. We conclude with a suggestion that, pace much discussion hitherto, it might be “experiment”, and not “context”, that forms the critical meeting-ground between neuro-biological and socio-political research practices, in the 21st century.

Locating Critique

At its heart, Critical Neuroscience is an attempt “to respond, both philosophically and scientifically, to the impressive and at times troublesome surge of the neurosciences” ( Slaby and Choudhury, 2012 : 29). Authors within this genre are not working to destabilize neuroscience: their more parsimonious and constructive goal is to question the broader cultural urge towards neuroscientific explanations, to point to the problematic bases both of this urge and of the brain science it wills into existence, and to imagine, beyond both, a different sort of neuroscience—one that is able to question its own “givens” and to recognize its own history and context; a discipline in which “historical, anthropological, philosophical, and sociological analysis can feed back and provide creative potential for experimental research in the laboratory” (ibid.: 29–30). At the center of this enterprise is a single qualifier: “critical”. It is the will to critique that legitimizes this programme, that drives it forward, and that organizes resources around it: “grounded in a framework of critical theorizing”, Choudhury et al. (2009) write elsewhere, “and in view of the social and cultural factors that shape research agendas and theories, Critical Neuroscience suggests ways to equip neuroscientific research with basic tools of critical practice” ( Choudhury et al., 2009 : 74).

But what, exactly, is the “critical” in “Critical Neuroscience”? Confronted with what they see –following Axel Honneth—as an emerging set of “social pathologies of reason” between the neurosciences and their contexts, Critical Neuroscience scholars interpret their task as opening up the black box of scientific facthood, and unveiling the deeply contingent socio-historical logics embedded within the neuroscientific fact (ibid.: 65 cf . Martínez Mateo et al., 2013 ). But what we wish to briefly explore here is the intellectual history and disciplinary genealogy mobilized by this term—including the forms of political, epistemological and economic commitment that are both held together within it, and excluded from it. If there is not space in this paper for an intellectual history of the urge to “critique” as such (see de Boer and Sonderegger, 2012 ), still we want to situate the use of this term more precisely within the broader oeuvre of a Critical Neuroscience.

Within that literature, various intellectual forebears are claimed for critique, including Kant ( Slaby and Choudhury, 2012 ), Axel Honneth ( Kirmayer, 2012 ), and even Bruno Latour ( Slaby, 2010 ). But if it seems difficult to assemble this inheritance into a coherent programme, in practice its articulation leads to a number of specific looping interests. For Choudhury et al. (2009) , a “critical” approach to neuroscience is an unmasking of the scientific “brain-fact”, a conceptual and anthropological exposure of the journey that mental phenomena take on their way to facthood, and an undermining of the political and economic contexts, media interests, lay perceptions, and so on, that are braided through that “fact” along the way ( Choudhury et al., 2009 : 65). For Campbell (2010) , a year later, a critical neuroscience is straightforwardly positioned as one “more attuned to the ‘social”’, ( Campbell, 2010 : 101). “What is at stake”, Campbell argues, is precisely “how social factors will be addressed—who is best positioned to address them through what vocabularies and with what goals?” (ibid.). For Kirmayer (2012) , a critical approach directs attentions specifically to the forms of cultural reasoning underpinned by the “neuro” prefix: critique, in Kirmayer’s account, “is our vehicle through which to focus on popular culture, as well as neuroscientific and technical rationality and their economic and political motivations” ( Kirmayer, 2012 : 367).

What binds these understandings together, and legitimizes the use of “critique” within Critical Neuroscience, is a commitment to a particular form of politics—a politics that is otherwise taken to be effaced from the rhetorical and experimental game that entangles the neurobiological mainstream. This commitment is perhaps most clearly expressed in Slaby and Choudhury’s ( 2012 ) “Proposal for a Critical Neuroscience”, where they argue that their venture “opens up a space for inquiry that is itself inherently and self-consciously political” (29). For Critical Neuroscience in general, this space is rooted in “the persuasion that scientific inquiry into human reality tends to mobilize specific values and often works in the service of interests that can easily shape construals of nature and naturalness” (ibid.). Or as they put it elsewhere in the same text, the overarching goal is to “analyze the allure and functions of the neuro in the broader scheme of intellectual and political contexts” (ibid.: 45). And again: “Critical Neuroscience should not stop at description and complexification”, being concerned instead with the “depoliticalization of scholarship in the face of the increasing commercialization of academia…a more radical and openly political positioning is needed in [the] face of these trends” (ibid.: 31).

Here, we draw an important distinction: for Critical Neuroscience, the argument is not that an apolitical scientific practice should suddenly become political; there is no assumption at all that there is no politics in neuroscience as it is currently constituted. Instead the argument is that: (1) a falsely depoliticized rhetorical and conceptual apparatus of neuroscientific experimentation and dissemination has excised, or hidden, the inherent political inputs of the neuroscientific enterprise, in its pursuit of a neutral-looking facthood; (2) said apparatus would be better served if it began to recognize the political assumptions and priorities that are always-already in it; and (3) in particular, the same apparatus should distance itself from a misguided rhetorical game of distanced facthood, which merely draws a veil over, and thus reifies, the cultural and political biases that are always (inevitably) in the experiment. It is thus not neuroscience as such, but the (broadly understood) material-semiotic experimental game of neuroscientific facthood, and what the game is understood to be for—including its manifestations in hypothesizing, disseminating, translating and so on—that is at stake in this claim for a re-politicization of the neuroscientific enterprise.

The immediate target and direction of a re-politicized neuroscientific apparatus, and of the neurobiological fact it draws forth, takes different forms. In the “Proposal”, particular emphasis is placed on “the commercialization of research…[such that] sociological analysis can highlight the pressures that commercial, pharmaceutical, and military interests place on neuroscience”—especially given that “scientists are not usually trained to be very sensitive to the subtleties of, and social conflicts within, political and institutional environments” (ibid.: 43, 39). Elsewhere, the authors seek

a discursive space for debate both in professional and practical domains about the categories and application of neuroscience, and about related social issues such as the organization of labor, conception of health and disease, goals and practices in parenting and education, issues about law and punishment, technological self-optimization, and much more (ibid.: 40).

In other texts, authors point to the burgeoning relations between neuroscience and national security industries ( Marks, 2010 ); the growth of a pharmaceuticalised biological psychiatry ( Kirmayer, 2012 ); the removal of “the social” from conceptions of addiction ( Campbell, 2010 ); the troubling relationship between neuroscientific findings and management techniques ( Slaby, 2010 ); the neurobiologization of crime ( Choudhury et al., 2009 )—and so on. In each case, whatever the locus of attention, the call to “critique” is a call to make manifest, and then reform, the political in the neuroscientific experiment, to which extent politics is located in moments and processes where such experiments, and the rhetoric of facthood that surrounds them, have an effect on, or are taken up by, actors with commercial, governmental or other neoliberal ends, thus having reactionary outcomes within the mundane politics of crime, labor, illness, security, and justice.

That such issues look like the bread and butter of a traditionally-minded social science is not a coincidence. For it is “the social”, understood in its unreconstructed Durkheimian sense, that gathers up and justifies this remit: “critical neuroscience puts particular emphasis on the social” argue Slaby and Choudhury (2012) : 36. Against (what they see as) a dominant “actor network theory” approach within the study of scientific practices and experiments, Slaby and Choudhury insist on “the social” as “a potential explanatory resource”—allowing analysts to avoid a quietist “neutral” cartography, and instead to “penetrate beneath the surface of emerging practices, relations, and styles into the dynamics of power that may shape or stabilize surface phenomena, facilitate or hinder certain alliances or actions” (ibid.: 37; cf. Rose, 1996 ; Durkheim, 1982 [1895] ). These authors, of course, are perfectly aware that, in the early 21st century, such an approach is somewhat passé : they explicitly refuse an old-fashioned account of the social as a view-from-nowhere explanans of all human phenomena, and stress (indeed, precisely following the actor-network approach that they are elsewhere suspicious of; see Callon, 1986 ) the processual “assemblage” of this “social” within a multitude of actors and practices. But thus trying to find a way—not always clearly or convincingly—between very different ways of conjuring the social, they refuse quietist tendencies embedded in recent sociological attention to scientific practices, insisting that “the activity of assemblage, in our sense of the term is …an inherently political one” ( Slaby and Choudhury, 2012 : 38, my emphasis).

This reliance on a self-consciously “political” socio-critique in the old style is, of course, perfectly respectable—if not very fashionable. Still, an important tension comes into view here: if these authors are committed to an epistemological politics suspended somewhere between Kant and the Frankfurt School, they are aware that this way of understanding critique, and this mode of doing politics, have both been convincingly superseded within the very social science literature upon whose methods and perspectives they are so reliant (e.g., Latour, 2004 ). What is centrally at stake for Critical Neuroscience, then, is an attempt to enact a form of critical attention to the neurosciences, from the point of view of the social—at a moment in which precisely such a mode of attention has been orphaned by the intellectual practices who claim the social as their own, and who thus form the empirical and theoretical ground upon which Critical Neuroscience seeks its contribution. Perhaps the fullest expression of this tension comes in an article by Slaby (2010) . “The challenge”, he writes, “is to render ‘critique’ meaningful again in a time when this notion has fallen into disrepute in mainstream thought and theory” ( Slaby, 2010 : 410). For Slaby, Latour’s re-invigoration of scientific facthood as “matters of concern” (i.e., stable entities whose stability is nonetheless an ongoing achievement; see Latour, 2008 ) permits ( contra Latour’s own view) a re-invigoration of the role of critique—which, for Slaby (2010) , is not a mere debunking of scientific authority, but a much more generative and constructive enrichment of scientific facthood, now thickly embroiled with matters of “human interest” ( Slaby, 2010 : 411). Critique, in Slaby’s account, calls political attention, amid the neutral-looking assembly of matters-of-concern, to who or what is doing all that concerning—and to whose concern gets excluded at the same time. 
For Slaby, “the assembling [of] matters of concern from multiple perspectives can provide a balancing force against the monopoly of experts and specialist associations” (ibid.). Here, “context” is the lever: the task is “to reinscribe the relevant influences and multiple causal factors, point to historical trajectories, and record cultural understandings and differences” (ibid). The summoning-up of context does not efface the neurobiology of a topic like addiction, but instead provides a “much richer” account that can build upon the “meager construal” of a neuronal reductionism (ibid). The work of the critic, then, is to identify interests, to seek out pathologies of reason, to begin a transdisciplinary discussion about the normative underpinnings of these interventions—and then to include all of this in some specific neuroscientific practice or encounter (ibid.: 411–412).

Given that Latour has been invoked by these authors themselves, we may rely on his rubric to point to some of the problems here. Certainly, authors within Critical Neuroscience want critique not to be the debunking, denouncing, killjoy of yore, and they situate the critical urge as a positive, additive function in the practice of neuroscience. But they remain committed to a view of, specifically, a neuroscientific research-practice which is: (1) de-politicized through its firm rhetorical exclusion of human social and cultural worlds (of interests, ideologies, economies, and so on); but also (2) nonetheless irretrievably political, if only in the sense that, by drawing a veil across these interests and ideologies, it reifies the (troubling) contexts in which its own performance takes place. At the heart of this view, then, is an idea that the experimental game of neuroscience itself relies on a distinction between, on the one hand, things that are human, and political, and economic, and unavoidably inflected by context, and, on the other hand, things that are scientific, and biological, and embodied, and best understood in laboratories. To put it another way, the argument of Critical Neuroscience is that even if the neurobiological sciences remain, at base, deeply human, cultural and political endeavors, the atomized brain-fact, and the experimental game that produces that fact, are the products of a technified attempt to exclude, cover-over, or simply ignore that human, cultural and political base. A better—more critical—research-practice is one that calls a halt to this misguided game of exclusion, denial, veiling, and so on.

If the committed Latourian surely agrees with Critical Neuroscience that some kind of politics, or interest, or agency, is inseparable from the generation of brain-facts, we will hardly find room in her thought for the imagination of a research-practice which succeeds precisely by relying on a distinction between the human world of context and politics, and the laboratory arena of neutrality and facthood. For Latour, of course, and quite unlike the core claims of Critical Neuroscience, there is no successful or sustainable fact-producing machine, no meaningful theatre of experimentation, to which politics and interest have been rendered external; there is no God-given task for the philosopher or the sociologist to “piece together the social, political, economic, social and cultural factors involved in the development of neuroscientific insights” ( Choudhury et al., 2009 : 65). If Critical Neuroscience seeks a space for its own analysis in the “the selective attitude and methodological reductionism of experimental approaches” ( Slaby and Choudhury, 2012 : 25), Latour reminds us that there is no experimentally-generated fact, no disseminated finding, no “depoliticalized” “matter of concern”, which has not always-already been generated, sustained and circulated precisely through a careful attention to, and cultivation of, its own complex bundle of interest, conviction, care, deviousness, misunderstanding, hope, hucksterism, and so on— the careful assemblage of which is the sole guarantee of successful “facthood” in the first place . Put more simply: there is, in Latour’s account, no successful scientific object, and no potent experimental practice, to which interest, context, politics, and democracy have been successfully rendered external, except in the most trivial sense. 
There can be no additive function for “critique”, moreover, unless it is founded on the idea that successful experimental practices are about producing facts without, or exclusive of, or even only in denial of, context, and discussion, and politics, and interest—an assumption, in other words, that we would have better facts, and more democratic ones too, if these externalized relations were much more on the surface, much more reflected-upon, much more open to critique . This is, finally, how we must understand the critical gesture at the heart of Critical Neuroscience. Despite appeals otherwise, it is locked within a social-scientific and philosophical literature whose organizing premise is that elaboration of the political is external to the rules of the neurobiological-experimental game, at least as it currently stands. That game is what we turn to in the next section of the paper.

Re-Thinking the Experimental

Of course, experimentation has been a central object of investigation in histories and philosophies of science for some time—and we cannot do justice to a long literature here ( Kuhn, 1976 ; Bachelard, 1984 [1933] ; Rheinberger, 2001 ). In recent decades, however, Hans-Jörg Rheinberger has brought discussion of experimental maneuvers to the forefront of histories of science, stressing that research does not typically begin with the choice of a theoretical framework, but in fact with the choice of a specific technological system; for Rheinberger (2001) , it is the experimental system, and not necessarily the hypothesis or the theory, that lies at the center of the knowledge-production process ( Rheinberger, 2001 : 19, 21f.). For the psychological sciences (including social psychology) especially, with their wavering affiliations to “science” as such ( Ash, 1992 ), there are well-known accounts of these disciplines’ historical trajectory towards experimentalism, and into positive science ( Danziger, 1992 ; Greenwood, 1994 ). Danziger has stressed, nonetheless, that this trajectory is not self-evident—since none of the emerging social sciences were experimental in their methodology ( Danziger, 2000 : 331). The emergence of an experimental rubric for (social) psychology thus required the imagination of a specific—and decidedly non-neutral—relationship between method and object: for pioneering social psychologists like Floyd Allport, Danziger points out, “it was impossible to advance a credible case for a methodology of scientific experimentation on any social object without redefining that object in a nontraditional way” (ibid.: 332–3.).

Thus, early experimental psychology employed methods that were “limited to exploring effects that were local, proximal, short term, and decomposable” (ibid.: 334). These conventions were made possible, of course, by redefining prior notions of the social in social science, which in turn were based on “effects that were non-local, distal, long-term, and experimentally non-decomposable” (ibid.). As empirical investigation in social psychology developed, and in common with developments elsewhere in the psychological sciences, statistical methods moved to the forefront of experimental technique ( Porter, 1996 ). It was then a short step for statistical significance to become the experimental apparatus of choice “for decisions about the validity of psychological hypotheses” ( Danziger, 1994 : 154). Again, we skate over a great deal of complexity here—but what interests us is the role of this complex “surface of emergence” within social-psychological experimentation, especially as that rubric has more recently become entangled in the neurosciences. Cromby (2007) , for example, reminds us that experimenting with social life in the neurosciences is a process of generating fixity over contingency, of emphasizing social cause over social influence, of replacing collective representation with embodied reification, and so on: the risk for experimenters, Cromby points out, is in confusing “rigidly measured differences between experimenter-constituted groups obtained in highly artificial circumstances” with “actual social processes in everyday life” ( Cromby, 2007 : 164). As Simon Cohn (2008) shows, in social neuroscience experiments especially, the “social” gets mapped onto the brain and has to be conceptualized as a material object: the experimental focus is not on the space of interaction, at what people do or what happens between them; the interest is directed towards what happens within the individual brain ( Cohn, 2008 : 100).

These descriptions, and cautions, about the emergence of an experimental game within social psychology and neuroscience, and especially about how the rules of that game figure social life, are well taken. And yet what interests us is less the historicization or contextualization of that game as such—and more what its conventions and its rules allow us to do, and to say. At the risk of appearing naïve, in what follows we will temporarily set aside our usual suspicions and ironies about this game, and about the broader forces and relations of power that impinge upon it. If experimental facthood in neuroscience—even the idea of such facthood—is a function of context-dependent rules, then what follows here is an optimistic story about the affordance of context, and not another lamentation about its constraint . We are fundamentally interested, first, in what a highly-stylized and tightly-bounded neuroscientific experiment can actually do, and, second, in the relationship between the limits and rules of this game, and the enactment of specific kinds of statements. The term “statements” is deliberately vague, here—we use it to draw attention to the broad enunciative lifeworld of neuroscientific experiments, and to the kinds of things that particular experiments make it possible to think, and to say. Consider, in this sense, a seminal study in social neuroscience—Tania Singer et al.’s studies on pain and empathy, published in Science in 2004 ( Singer et al., 2004 ). Singer investigated how women, lying in an fMRI scanner, responded neurally to pain experienced by their romantic partners. The paper begins with a general observation: “Human survival depends on the ability to function effectively within a social context” ( Singer et al., 2004 : 1157). 
The authors emphasize that effectively functioning in such a context is embedded in the capacity to understand “others’ intentions and beliefs”—but it also requires empathy, or “being able to understand what others feel” (ibid.). The experimenters thus take 16 couples, placing one member of each in the scanner, and the other elsewhere in the room. Both are given a pain stimulus, and fMRI measures are taken from the subject in the scanner both during the reception of her own stimulus, and, separately, while she sees her partner receiving a pain stimulus. To cut a complex story short: similar brain regions were active both when the subject in the scanner received the stimulus and when she was given a signal that her partner was receiving the stimulus (recruiting brain regions associated with the affective dimensions of pain, but not the sensory dimensions).

In one sense of course, and without wishing in the slightest to denigrate this fascinating study, one might say that—aside from neuroanatomical specificity—there is a certain mundaneness to its conclusion. Who after all, is surprised to learn that emotions are contagious, or that we feel the pain of others, or that if you are romantically involved with someone, you probably know how that person feels in certain situations? But, as the experimental situation unfolds, a kind of strangeness emerges too. Because we begin with a very artificial-looking, densely-rule-bound situation: in the experimental setting, a person lying in the scanner must rate the level of pain of her loved one, whose hand she only sees on a tilted mirror. And she knows that this “pain” is only a small, controlled prick in an experimental setting; there is no major trauma. And yet, harnessing the force of its measures, this rather artificial construct of experimental rules and regulations, as applied to the empathic experience of pain, is, nonetheless, skillfully—and convincingly—traced to a very general, and very striking remark about the survival of the human species in general, and the role that social context and empathy plays in that survival—via a rather specific and carefully demonstrated observation that understanding the feelings of others has a comparable neural architecture to understanding our own feelings.

It is this dynamic of the strange and the mundane that we wish to keep in play here, and especially in our discussion of the experiment that follows. Because, in both of these studies, what appear to be highly artificial claims about very commonsensical phenomena, can nonetheless enact, legitimize, and authorize, some very striking, and potent, and not at all obvious, claims about the social world and its human inhabitants in general. The temptation for the external observer is to only focus on one half of this dynamic: but if we focus only on the strange half (on the tightly-bound, carefully-quantified artificial game that allows us to say something convincing about empathy, and about human social life in general), then we risk taking the whole thing rather too seriously: it is a powerful biological reductionism making very grand claims; it quantifies and neurobiologizes human social life; it locates evolutionary history in romantic partnership, and so on. On the other hand, if we only focus on the mundane aspects of the experiment, we run the opposite risk—we do not take it seriously enough, insisting that all it does, really, is reproduce, in a highly complex technocratic language, in neatly quantified and modeled form, something that of course we have already known for many years. What we want to do, in our consideration of the experiment that follows, is hold onto both halves of this binary, and focus on the dynamic between them. What would happen if we took the experiment just seriously enough? Could we learn to see it as a game, with particular rules, and constructs, and rituals, that—played well—sometimes allows the generation, and then also the legitimation, of some very remarkable statements? Could we see it as a game in which the rules and conventions might be manipulated, and played-around-with, such that other kinds of statements might become possible? 
Most importantly: could a critical understanding of an experiment be one that seeks to understand, to replicate, and even to admire, the style of an experimental game that makes a particular statement possible? Might we even, pursuing this inversion, describe a naïve analysis of an experiment as one that takes its own sudden awareness of the game to be both the start- and end-point of considered investigation?

By a game “played well”, here, we refer only to an experimental demonstration made convincingly and sustainably—i.e., one that has achieved force, or gained strength, to draw again on the Latourian vocabulary ( Latour, 1988 : 158–162). Our interest is in the relationship between such demonstrations and the possibility of subsequent “statements”—statements, for example, as above, about the entanglement of social relationships, affect, and human survival; or, indeed, as we will discuss below, about the connections between sociality, co-operation, communication, and success. We stress that we are not trying to construct a timeless or universal rubric for the analysis of experiments—questions about power, about the implications of experiments, about historical context and so on, surely remain pertinent. What we add here, much more modestly, is just another way of thinking about neuroscientific experiments—one that is not currently prominent within the critical literature, and yet that might (this is the core gambit of the paper) have the capacity to significantly diffract how we imagine the experiment’s relationship to critique, and to politics.

An Optimally Critical Experiment

The OIM experiment was conducted in Aarhus (Denmark) by Bahador Bahrami and a group of researchers affiliated with Aarhus University and University College London ( Bahrami et al., 2010 ). The purpose of the experiment was to determine the information-sharing conditions under which a pair of individuals, co-operating to make perceptual judgements about abstract visual stimuli, might outperform the judgements of the best individual in that pair; the authors were interested in “how signals from the same sensory modality (vision) in the brains of two different individuals, could be combined in a social interaction” (1081). Stated more plainly, the purpose of the experiment was—as the abstract suggested—to test the truth of a well-known cliché: “are two heads really better than one?” (ibid.).

In this section, we use this experiment to illuminate the ways in which dynamics between the strange and the mundane can be enlivened within the experimental space of a contemporary neuroscience. But we also dwell on this experiment because it helps to show, perhaps under the bland surface of much social psychology and social neuroscience, the potential for some unexpected traffic between enactments of “the social” and rhetorics of “critique”—a traffic that, we claim, is occluded by the insistence on a rigid separation between these domains. In some ways, the question posed by the OIM experiment could scarcely have been more humdrum. At the same time, the experiment also significantly rippled the epistemological surface from which it grew—such that it has now come to play an important critical role in relation to that background, and in particular to the way that it has both imagined and enacted specific iterations of “the social”. There is hardly space here to give an adequate account of the histories of social and behavioral psychology (for which see e.g., Danziger, 1992 ; Greenwood, 1994 )—nor, indeed, can we do justice to the wavering history of “the social” within the economic, behavioral, psychological, and social sciences ( Rose, 1996 ), or to the often subterranean critical psychologies that have quietly torqued this history ( Burman, 1994 ). Our goal is slightly different: in pursuit of some critical imperatives that are already in contemporary neuropsychology experiments, we take the OIM experiment as exemplary of a specific field of possibility. This will require some unavoidably broad historical brushstrokes, in order to give a sense of the wider surface of emergence.

Among the foundational themes of 20th century, Anglo-American empirical social psychology was a concern with how peoples’ judgements and behaviors were influenced through interactions with others ( Allport, 1920 ; see Parkovnick, 2000 , for a contextualization of this programme). If the “social” in “social psychology” has not always been clear ( Greenwood, 1994 ), and indeed, social psychology itself an often disunited science of differentially “social” traditions ( Good, 2000 )—nonetheless, we might broadly see, within the advent of a “social” psychology, the assemblage of experimental and conceptual spaces for imagining forms of motive and intent beyond the individual. But this relationship has not always made for happy inferences. Solomon Asch’s (1952 , 1956 ) experiments on group pressure and conformity are perhaps some of the best-known exemplars of this phenomenon: Asch presented groups of participants with cards showing a reference line alongside a set of comparisons, and asked members of the group to report which of the comparison lines was equal in length to the reference. Of course, as is now well known, each group contained only one real participant; the rest of the group was made up of experimental confederates, instructed to give obviously false judgements on a subset of pre-ordained trials. Asch found that participants often went along with the group consensus, even though it was obviously wrong, even though the confederates were all strangers, and even though there were no incentives to conform and no penalties for defecting. What was particularly striking about Asch’s experimental results is their suggestion that subsuming one’s own judgement to that of the group might not be strategic; that it might have no moral or material end—that it might be something much more banal, a kind of path of least resistance in the face of social influence.

As a metonym for social psychology, Stanley Milgram’s (1963 , 1965 ) obedience experiments perhaps loom even larger in the popular imagination than Asch’s work. In Milgram’s studies, people were told that they were participating in an experiment on learning, and that they were to complete the task with another person. Again, as is now well known, each participant was told that she was assigned the role of “teacher”, while the other participant—who was in fact an actor—was assigned the role of “learner”. The experiment consisted of a series of trials in which the teacher was required to quiz the learner: for every mistake the learner made, the teacher was to administer (what they thought was) an increasingly powerful electric shock. The finding that has become most centrally associated with these experiments is that a majority of people were willing to administer the electric shocks, beyond an apparently safe level, when asked to do so by an authoritative other. What interests us, however, is the way in which Milgram’s studies extended the normative scope of the statements produced by Asch’s: people did not merely conform to social influence; they did so even to the point of causing serious harm to others. “The social psychology of this century”, Milgram (1974) himself would later reflect, “reveals a major lesson: often it is not so much the kind of person a man [sic.] is as the kind of situation in which he finds himself that determines how he will act” ( Milgram, 1974 : 205).

Clearly, there is much to be said, here, about the mid-century and post-war historical context of these studies, and their intense focus on “intergroup relations, leadership, propaganda, organizations, political (e.g., voting) behavior, economic (e.g., consumer) behavior, and environmental psychology” ( Pepitone, 1981 : 977). Within such a space, it is not hard to understand how scholars may have focused heavily on the susceptibility of people to the form of persuasive messages over content ( Tversky and Kahneman, 1981 ); the fragility of the links between people’s convictions and their actions ( Darley and Batson, 1973 ); the impossibility of individual, conscious free will ( Wegner, 2002 ), and so on. At stake in all of these experiments are two core claims: (1) that some motive force outside of, or beyond, the individual, might drive particular instances of actions and belief; and (2) that when people form a group to interpret uncertain information, reach decisions, and plan for action, the outcome is often not good. We situate these interests not only in a post-war concern with propaganda and inter-group relations, but also in wider social and economic trends, manifested not least in the psychological sciences, that, at least for much of this period, elevated the individual over the collective: as Danziger (1994) again reminds us, throughout the 20th century, much institutionalized, mainstream psychological research has gradually found itself in agreement with the late-capitalist notion of an “independent individual for whose encapsulated qualities all social relations are [or should be] external” ( Danziger, 1994 : 296).

Over time, of course, this marginalization of the positive qualities of social interaction—what has been called the “negative bias” in social psychology ( Sheldon and King, 2001 )—has been drawn into question (It is worth noting that this bias does not exist everywhere: different views on the virtues of collectivity and teamwork have long persisted especially in the organizational and management literatures. See Guzzo and Shea, 1992 , for an overview). This “negative bias” is precisely the context in which we wish to understand the OIM experiment, which tested a cliché—that two heads are better than one—that was nonetheless at odds with that tendency. Pairs of participants were asked to look at two sets of visual stimuli, presented in sequence. In one set, the contrast of one of the elements was slightly higher. The task was to identify which set—the first or the second—contained the stimulus with the higher contrast; participants first performed the task alone, after which they were forced to make a joint decision. The rules of the experimental game allowed for four models of information sharing: (1) When participants disagreed in their judgements, only the isolated individual judgements were shared, and the joint decision was decided randomly (coin-flip or CF model); (2) Individual judgements were again shared, but the pair learned (from feedback) which person had the better track record, and weighted their joint decision accordingly (behavior and feedback or BF model); (3) Participants shared not only their individual decisions, but also their degree of confidence in their decisions, and weighed up their joint decision (weighted confidence sharing or WCS model); and (4) Participants directly shared the parameters of their sensory representations, as if these representations were somehow transmittable through a direct neural channel (direct signal sharing or DSS model).

Without wishing to go into excessive detail—the four models were fully specified quantitatively, and, within the contrived experimental context, made distinct quantitative predictions, which could straightforwardly be distinguished. With all trials completed, the experimental data ultimately supported the WCS model, showing that although people cannot communicate their perceptions directly, they can and do improve their performance by comparing their confidence about those perceptions. The implications of this finding were striking: not only could people effectively communicate the certainty of their judgements, this ability allowed them to outperform the best member of the pair (The only catch was that people’s individual perceptual sensitivities should not be too different from one another, and they should be fairly accurate in evaluating how confident they were about their own judgements). So contrary to the negative view of social interaction prevailing in empirical social psychology at the time, two heads really can be better than one. The qualities of the “independent individual” have perhaps been radically over-stated.
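The logic of this model comparison can be sketched in a minimal signal-detection simulation. This is our own illustration, not the authors’ code: the sensitivity values, the unit-noise framework, and the specific decision rules below are simplifying assumptions, chosen only to show how the four sharing rules make distinct, testable predictions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_dyad(d1, d2, n=500_000):
    """Simulate a two-interval contrast task for two observers with
    sensitivities d1 and d2 (signal-detection framework, unit noise)."""
    s = rng.choice([-1.0, 1.0], size=n)        # which interval holds the target
    x1 = s * d1 / 2 + rng.standard_normal(n)   # observer 1's internal sample
    x2 = s * d2 / 2 + rng.standard_normal(n)   # observer 2's internal sample
    dec1, dec2 = np.sign(x1), np.sign(x2)      # individual choices

    agree = dec1 == dec2
    # CF: on disagreement, the joint decision is a coin flip.
    coin = rng.choice([-1.0, 1.0], size=n)
    cf = np.where(agree, dec1, coin)
    # BF: feedback identifies the more sensitive observer; the pair
    # follows that observer's judgement.
    best = dec1 if d1 >= d2 else dec2
    bf = np.where(agree, dec1, best)
    # WCS: observers share confidence (the sample in units of noise);
    # summing signed confidences means that when the two disagree,
    # the more confident observer carries the joint decision.
    wcs = np.sign(x1 + x2)
    # DSS: observers share raw sensory signals; the idealized benchmark
    # weights each signal by that observer's sensitivity.
    dss = np.sign(d1 * x1 + d2 * x2)

    acc = lambda d: float(np.mean(d == s))
    return {"obs1": acc(dec1), "obs2": acc(dec2),
            "CF": acc(cf), "BF": acc(bf), "WCS": acc(wcs), "DSS": acc(dss)}

results = simulate_dyad(1.0, 1.5)
for model, accuracy in results.items():
    print(f"{model:>4}: {accuracy:.3f}")
```

Run with moderately similar sensitivities, the simulation reproduces the qualitative pattern at stake in the experiment: the confidence-sharing dyad (WCS) outperforms the best individual (and hence BF), and approaches the direct-signal benchmark (DSS), while coin-flipping (CF) lags behind. When the two sensitivities are made very unequal, the WCS pair can fall below its better member, which is the caveat the authors note about similar perceptual sensitivities.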

Of course, this is only one experiment—a single case study—and on its own it hardly moves the entire field of social cognitive neuroscience towards a more expansive view of “the social” as such (at least in the view of commentators who find the neurosciences reductionistic on this score). Nor does it suffice to demonstrate the over-arching point we’re gesturing at. Indeed, many will point out that even here, irrespective of the result, what is at stake is still some kind of “biologization” of the social—and that that (including all the imperatives that are contained within it) should be the focus of critical attention. Others will argue that OIM is only another kind of falsification—that it leaves the broader socio-political contours of the discipline untroubled. But, as we argued at the beginning, what interests us here is neither a dramatic revision of Critical Neuroscience nor some major event in social psychology and neuroscience. Our goal is a more modest one: we only want to suggest that, looked at in a particular sort of way, and interpreted in terms of a specific history, OIM might be indicative of a more nuanced relationship between critique and experiment than has yet been allowed in these literatures. We do not say that there is anything about experimental design—still less the specific design of this experiment, which may well have produced a different kind of finding—that guarantees a “progressive” or a “critical” result (nor, of course, was there intended to be).
What we are ultimately trying to show is only that the rules of the neuroscientific and psychological experimental game—and in particular that game’s insistence on associating particular kinds of facthood with particular kinds of distance—are not always the province of an unreflexive, reactionary cabal; that there can be neuroscientifically-wrought, biologically-correlated, methodologically-conservative factual imperatives, which seem to make some more—for want of a better word—“progressive” politics possible.

Perhaps our conclusion over-extrapolates from the data. But that is partly the point. Because this experiment not only moves us away from a negatively-biased social psychology more broadly—it legitimates and actualizes a whole series of other statements, both about human decision-making and about the virtues of collectivity in general. Indeed, this set of statements was subsequently expanded by experiments that pushed at the edges of this claim, showing how, in the scenario investigated by Bahrami et al., the collective benefit of the group emerges over time ( Bahrami et al., 2012 ), and how this phenomenon is based on increased alignment in the language used to express confidence ( Fusaroli et al., 2012 ). Moreover, if at a meta-theoretical level, the goal of the experiment was to reveal strengths in social interaction, at a methodological level the experiment was rigorously designed, and it precisely tested competing quantitative models. Thus, if it was critical of the way that social psychology had conjured the virtues of interaction, it made this critique entirely from within the confines of the social-psychological experimental game itself. Again, if it is only one case, the OIM study might be interpreted in terms of a broader, revised invocation of “the social” within social psychology; but it has also had an impact beyond these discussions, having been covered extensively by mainstream media outlets, and forming the basis for new experimental and meta-theoretical discourses about the virtues of information-sharing and open argumentation (e.g., Kanai and Banissy, 2010 ; Frith, 2012 ; Mercier and Sperber, 2012 ).

Before we concretize our analysis, let us add some important qualifiers to this story. Needless to say, first, this study might have gone another way; it might well have ended up re-enforcing the same conservative assumptions about the virtues of the private individual. But what’s interesting to us is that it didn’t: what captures our attention is the realization that such a conservative tendency is no special ally of the context-denying experimental game. Our core claim is that OIM is an example of a study which has not sought to examine its own biases and prejudices; nor has it attempted to disrupt the overt exclusion of political and economic contexts from experimental spaces; this is an experiment, by contrast, whose adherence to the internal rules of the experimental game of the neurosciences, whose delicate manipulation of the accepted and canonical parameters for these kinds of interventions, whose sensitivity to the dynamics of banality and strangeness within neuroscientific experimentation, has allowed it to enact, propel, and legitimate, what might otherwise be regarded as a progressive—even critical—intervention. But here we add a second caution: if our claim for the critical nature of this intervention relies on a shift in the psychological gaze from the capacities of the individual to the virtues of the collective, there is also a literature that locates the governing power and force of contemporary capitalism precisely in a shift to the efficiencies of flexible networks, and networked societies ( Sennett, 1999 ; Castells, 2000 ; Boltanski and Chiapello, 2007 ). Through such a lens, OIM might be read as a demonstration—even a biologization—of the productive efficiency of networked collectivity (see especially Hartmann, 2012 , on the relationships between neuroscience and network capitalism). 
In this paper, however, we do not claim that to move from the individual to the collective is necessarily “progressive” or “good”—experiments, Rheinberger (1994) reminds us, often have many stories to tell. Our story is bound up with the psychological elevation of the private individual ( Danziger, 1994 ), an historical focus in that discipline on the negative effects of group interaction ( Pepitone, 1981 ), and contemporary strategies for governance that still rely on the imagination of an atomized, psychologized, individual subject (see Slaby, 2010 : 407, for a potent, neurobiologically-inflected example). It is in terms of this—still present—history that we want to interpret OIM as a critical intervention, and as a resource for (even an ally of) those who still see much to critique in such formations. Of course this alliance remains always in potentia ; it is precisely the potentiality of OIM, and the forms of alliance that may be drawn with and through it, that we direct attention to here.

Enacting the Critical

We have argued that the intellectual force of Critical Neuroscience turns on a very specific relationship between “critique” and “neuroscience”. We suggested, further, that re-focusing on the specifically experimental spaces of the new brain sciences might help us to re-think this relationship. And we described one case where the parameters of an experimental game produced and legitimated some potent, critical statements from within a social neuropsychology. In this final section, we draw out four implications of that account.

We suggested earlier that at least three factors distinguish Critical Neuroscience: (1) it insists on a deeply classical notion of “critique”, specifically as a self-consciously emancipatory, anti-commercial, and anti-capitalist socio-critique, rooted in the “historico-political” mission of the Frankfurt School (Slaby and Choudhury, 2012: 29); (2) it argues that socio-critique is something (currently; not necessarily) exterior to the experimental practices of the new brain sciences (if not to those sciences as such); and (3) it proposes further that the “brain fact” is but one node within a closed loop of mental phenomena, media representations, political and economic contexts, and so on, and that critical intervention is thus “studying the journey of a phenomenon in and around the neuroscience lab”—but it is never that journey itself (Choudhury et al., 2009: 64, our emphasis).

We are much in sympathy with these arguments. But we use the OIM experiment to show that, even if we wish to hold onto such a notion of “critique”, we do not necessarily need the form of inquiry that these authors propose. The OIM experiment shows that, sometimes, nothing more than playing within the conventions and procedures that make up the experimental setting is required for the enactment of a prominent, and widely-publicized critique of—in this case—the deeply troubling, and intensely political, individualizing tendencies of much contemporary psychology and neuroscience. Unquestionably, the experiment, published in Science , created a very traditional, context-denying, unimpeachably “scientific” new “brain-fact”. And yet still this modest fact makes many strikingly critical statements possible—even necessary—within the new brain sciences: human beings often work sub-optimally when they work alone; collectivity trumps individualization, at least under some conditions; the communication of evidence is a delicate and subtle process; accurate knowledge is less an attribute of the single individual than a product of careful co-operation—so, on and on, go the claims that can be made. We argue that, irrespective of our notion of critique, we still need to extend our notion of a critical apparatus beyond an externally-focused, experimentally-indifferent, text-based form of (sometimes) scholastic nit-picking. Critical Neuroscience, if it is to further its contribution, might recognize that critique can be in brain-facts too.

There is an odd correspondence between, on the one hand, some of the central assumptions of Critical Neuroscience and, on the other, general enthusiasm about the power and force of the contemporary neurosciences (e.g., Lynch and Laursen, 2010). If on first glance these seem like diametrically opposed literatures, still neither of them sees the neurobiological experiment, and the formal rules that demarcate it, as a way of potentially doing political critique (Fitzgerald and Callard, Forthcoming). Critical Neuroscience is insistent that the formal laboratory-space of the contemporary neurosciences, and the regulations that govern it, are—rightly or wrongly; accurately or mistakenly—a more-or-less politics-free zone (Slaby, 2010: 406; Slaby and Choudhury, 2012: 39). By contrast, we have drawn on the OIM experiment to show that politics might sometimes be found in strange places (even occasionally outside of Frankfurt); that it can be expressed through some unexpected experimental practices (even in some quantitatively-specified neuroscientific modeling techniques).

Defenders of Critical Neuroscience might here say that we have missed the point; the whole purpose of these authors’ intervention has been to say that the seclusion from the political is merely rhetorical, that the experiment is of course always-already profoundly ideological (Slaby and Choudhury, 2012: 31). The main issue, moreover, is not a lack of politics, but the wrong politics—neuroscientific experiments are laid low by the “surprising parallel” between “cutting-edge neuroscience”, “organizational and management literature” and “neoliberal politics” (Slaby, 2010: 405). But the OIM experiment allows us to come at this relationship from quite a different angle—one that is intended to illuminate, rather than dispute, its characterization in the Critical Neuroscience literature. First, via its critique of the individualizing tendencies of social psychology, it shows us concretely how even the “depoliticized” rhetorical game of experimentation might not be as demure as it first appears—that a carefully assembled, straightforward neuroscientific experiment can make a (potentially) critical intervention. So if we want to understand the relationship between politics and neuroscience, we will need some detailed account of experimental practices, and of the kinds of claims that those practices make possible. Second, it shows us how the politics expressed through this game can serve some “progressive” ends, such as provincializing the negative bias towards human sociality within contemporary psychology, and enabling a stream of experimentation on the potency and primacy of interaction and cooperation—a research finding arguably quite inimical to at least some contemporary forms of neoliberal politics (but note again our qualifiers to this conclusion, discussed above). To realize this potency, you need to know the rules of the game—but you also need to know how those rules might be used for subtle and surprising ends. In order to understand the relationship between neuroscience and the potential for a more emancipatory, collective claim, you could do a lot worse than pay attention to classically-constituted psychophysical experiments.

Re-positioning experiments in their “context” is central to the mission of Critical Neuroscience: scientists should “be involved in the analysis of ‘contextual’ factors” (Slaby, 2010: 397); “neuroscientists need to critically examine scientific practices and institutions as well as the wider social contexts within which they work” (Choudhury et al., 2009: 65); “the gathering of context in many cases may end up laying bare the economic and political imperatives that sustain particular styles of thought” (Slaby and Choudhury, 2012: 35). Over and over again, reversion to “context” is positioned as the weapon wielded by the critical imperative: it opens up the black-box of experimentation, allowing experimenters to see the ideological constraints within which they produce and disseminate knowledge. But we have tried to show that the relationship between the experiment and its context is not so straightforward—that the elision of context is not always a mark of deficit.

What makes experimental demonstration so potent, of course, is that the rules of the game require everyone to pretend that context doesn’t exist—that facts are somehow independent of the alarmingly human and social circumstances in which they have been assembled (Latour, 1987). We have no interest, here, in re-running philosophical or social-scientific debates about the structures of experiment (for which see e.g., Moghaddam and Harré, 1992; Shapin, 2010; Rheinberger, 2011). But we are interested in seeing how the formal elision of context (whether or not we agree with it; whether or not we think it’s ever actually enacted in practice) allows researchers to do and say particular kinds of things—and not all of them bad, or reactionary. What we have tried to show with the OIM demonstration is that the context-denying performance of an experimental game does not have to traduce the critical imperative—that it can, in fact, enact and legitimate a whole slew of critical interventions. More importantly, perhaps, to the extent that neuroscientific facts are (of course) always-already embedded in a context—then the interesting question might be less “how can we use ‘context’ to destabilize the facts that already exist?”; and more, “how can we play with context—including hiding it, denying it, excluding it—to facilitate the generation of facts that we want to bring into being ourselves?” (cf. Latour, 2010). It is an attention to experiment that is the radical gesture, here; not context. And it is precisely that quality of attention that allows these scholars to enact a Science-sanctioned sign of resistance to conventional thinking in the field (Fleck, 1979; cf. Roepstorff, 2002).

So how, finally, are we to think about neuroscientific experiments? More specifically: what should our attitude towards them be? How seriously should we take them? As we suggested earlier, an odd feature of much external commentary on neuroscientific experiment is its suspension between two poles: on the one hand, experiments are taken very seriously indeed. This is the experiment as a kind of reductionist terror, producing highly technified, publicly-valued, severely-reduced studies, which are “thought to be assuming the role of guidance in many people’s lives, both practically and through being incorporated into their self-understanding” (Choudhury et al., 2009: 63). On the other hand, experimental knowledge is not taken seriously at all. This is the experiment as thinly-veiled word-game, one whose thin, mediatized claims are easily taken apart once we “scrutinize and lay bare scientific conventions that are taken for granted, [as well as] tacit knowledge, [and] vested interests at work in neuroscience research or their impacts on people” (Slaby and Choudhury, 2012: 39). These are caricatures of complex arguments, of course—but they help us to draw attention to this strange back-and-forth between terror and dismissal.

By drawing attention to the OIM experiment, we argue for a Goldilocks approach within the sociology and philosophy of neuroscience: experiments should be taken just seriously enough. Throughout this paper, we have described the OIM experiment’s adherence to the “rules of a game”, and the care with which it arranges the props, tools, conventions, apparatuses and devices that make neuroscientific knowledge possible. But we also think that rules, games, tools, conventions and props are non-trivial things. To recognize this experiment as a clever move, within a specific game, is to both take it for what it is, and value it for what it is. We have shown how it was precisely an awareness of, and adherence to, the rigors and rules of the experimental game that allowed these scientists to intervene in unexpected, critical ways, and even to generate new “progressive” claims about collectivity and individuality. Indeed, we want to suggest that not only do experiments not have to be (either scientifically or politically) perfect to be interesting; it is precisely because they are not perfect that they are interesting (Rheinberger, 1994). The OIM experiment takes the experimental game just seriously enough to have its claims treated as consequential and impactful; but not so seriously that it cannot play within the taken-for-granted rules of the game, that it cannot thereby reach out to broader political and social landscapes, that it cannot produce other kinds of strange statements too. Not for nothing does the paper conclude its sober analysis about success in locating good evidence between individuals with the observation that “we know all too well about the catastrophic consequences of consulting ‘evidence’ of unknown reliability on problems as diverse as the existence of weapons of mass destruction and the possibility of risk-free investments” (Bahrami et al., 2010: 1084). Being able to make such a claim, in a journal like Science, is precisely what we refer to when we urge commentators to learn to take experiments “just seriously enough”.

So what should be the Critical in Critical Neuroscience? After all, much as the Critical Neuroscientists look upon neuroscience itself, we are not invested in tearing down Critical Neuroscience—merely inviting it to reflect on the biases and assumptions inherent in its own approach, asking it to examine the forms of politics it enacts, and encouraging it, in its turn, to consider especially the methodological assumptions through which it enacts them. We have tried to show that, in the rush to critique and to reform neuroscience; in the desire to remake it as a more socially and politically incisive discipline; in the stern denunciation of its ties to managerial, pharmaceutical and other spaces of neoliberal accumulation; in the demand that it be reflexive, and self-critical, and more obviously and openly aware of its own surroundings—in all of this, broader questions about how critique might be enacted within the mundane rules of the contemporary neuroscientific game have been missed. We have argued that, contra the literature up to now, the first question for a specifically critical neuroscience needs to be: what can neuroscience do? We do not here offer any kind of comprehensive answer. But what we have tried to gesture at is the realization that if neuroscience can help to govern and surveil us; if it can pathologize us and reduce us to our biological parts; if it can induce us to buy drugs, nudge us to change our behavior, and help us to become more rigidly bourgeois in our parenting—then surely it can also help us to imagine and enact ourselves, and our societies, and the political, economic and cultural assumptions on which those societies are organized, in some more interesting and hopeful ways too. We have demonstrated just one minor instance of how this might work, and how a critical political statement can be made possible entirely within the constraints of contemporary experimental practice. But our central contribution has been to argue that the critical imaginary of Critical Neuroscience, if it does not require radical reform, at least needs a broader sense of its own possibility. We have urged attention to the critical impetus of experiment, here—but there are likely many similar arguments waiting to be made.

At the end of their 2009 programmatic contribution, Choudhury et al. point out that the neurosciences are advancing further and further into areas that were once dominated by the humanities, the social sciences, the clinical arts, and so on (73–74). But their own goal, they stress, is less about keeping the neurosciences out of these domains than about creating the ground for “critical engagement” that will ultimately “drive new ideas for experiments in neurosciences” (ibid.). They wish to show sociologists and anthropologists how they, too, might help “to influence the shape of future research in neuroscience” (ibid.). Here, we are in total agreement with Choudhury et al. But our suggestion is that such engagement is unlikely to come from an insistence on implacable context; indeed, the central gambit of this paper is that it might be more “experiment”, and less “context”, that forms the meeting-ground between neuro-biological and socio-political research practices in the 21st century. We join with our colleagues in philosophy and the social sciences, and even in critical theory, who are interested in the practices and effects of the neuroscientific laboratory. But what we want to stress is that, if they do cross the experimental threshold, it might not be so unthinkable for such scholars to just run with the rules of the game as they find them. In other words, it might not be so terrible to go into the experiment, and to leave the world where it is for a moment; to set aside an otherwise valuable attention to “politics”, and “interest”; to leave the workings of “society” and “culture” where they are; and finally, even if only temporarily, to (quietly) close the laboratory door.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

For insight on the core issues of this paper, we are especially grateful to Uffe Juul Jensen, who shared in the discussions that eventually produced this manuscript, and who provided much important critical commentary on its progress. Any errors remain the authors’ own.

This paper was produced under the auspices of the Mindlab “Evidence” project at Aarhus University, part of the Danish ministry of science’s UNIK initiative.

Allport, F. H. (1920). The influence of the group upon association and thought. J. Exp. Psychol. 3, 159–182. doi: 10.1037/h0067891

Andreasen, N. C. (2001). Brave New Brain: Conquering Mental Illness in the Era of the Genome. Oxford: Oxford University Press.

Asch, S. E. (1952). “Group forces in the modification and distortion of judgments,” in Social Psychology , ed S. Asch (Englewood Cliffs, NJ: Prentice Hall), 450–501.

Asch, S. E. (1956). Studies of independence and conformity: a minority of one against a unanimous majority. Psychol. Monogr. Gen. Appl. 70, 1–70. doi: 10.1037/h0093718

Ash, M. G. (1992). Historicizing mind science: discourse, practice, subjectivity. Science in Context. 5, 193–207. doi: 10.1017/s0269889700001150

Bachelard, G. (1984 [1933]). The New Scientific Spirit. Boston: Beacon Press.

Bahrami, B., Olsen, K., Bang, D., Roepstorff, A., Rees, G., and Frith, C. (2012). Together, slowly but surely: the role of social interaction and feedback on the build-up of benefit in collective decision-making. J. Exp. Psychol. Hum. Percept. Perform. 38, 3–8. doi: 10.1037/a0025708

Bahrami, B., Olsen, K., Latham, P. E., Roepstorff, A., Rees, G., and Frith, C. D. (2010). Optimally interacting minds. Science 329, 1081–1085. doi: 10.1126/science.1185718

Boltanski, L., and Chiapello, E. (2007). The New Spirit of Capitalism. London: Verso.

Burman, E. (1994). Deconstructing Developmental Psychology. London: Routledge.

Callon, M. (1986). “Some elements of a sociology of translation: domestication of the scallops and the fishermen of saint brieuc bay,” in Power, Action and Belief: A New Sociology of Knowledge? , ed J. Law (London: Routledge), 196–223.

Campbell, N. D. (2010). Toward a critical neuroscience of ‘addiction’. Biosocieties 5, 89–104. doi: 10.1057/biosoc.2009.2

Castells, M. (2000). Materials for an exploratory theory of the network society. Br. J. Sociol. 51, 5–24. doi: 10.1080/000713100358408

Choudhury, S., Nagel, S. K., and Slaby, J. (2009). Critical neuroscience: linking neuroscience and society through critical practice. Biosocieties 4, 61–77. doi: 10.1017/s1745855209006437

Cohn, S. (2008). Making objective facts from intimate relations: the case of neuroscience and its entanglements with volunteers. Hist. Human Sci. 21, 86–103. doi: 10.1177/0952695108095513

Cromby, J. (2007). Integrating social science with neuroscience: potentials and problems. Biosocieties 2, 149–169. doi: 10.1017/s1745855207005224

Danziger, K. (1992). The project of an experimental social psychology: historical perspectives. Sci. Context 5, 309–328. doi: 10.1017/s0269889700001204

Danziger, K. (1994). Constructing the Subject: Historical Origins of Psychological Research. Cambridge, UK: Cambridge University Press.

Danziger, K. (2000). Making social psychology experimental: a conceptual history, 1920–1970. J. Hist. Behav. Sci. 36, 329–347. doi: 10.1002/1520-6696(200023)36:4<329::aid-jhbs3>3.0.co;2-5

Darley, J., and Batson, C. D. (1973). From Jerusalem to Jericho: a study of situational and dispositional variables in helping behaviour. J. Pers. Soc. Psychol. 27, 100–108. doi: 10.1037/h0034449

de Boer, K., and Sonderegger, R. (2012). Conceptions of Critique in Modern and Contemporary Philosophy. London: Palgrave Macmillan.

Dumit, J. (2004). Picturing Personhood: Brain Scans and Biomedical Identity. Princeton, NJ: Princeton University Press.

Durkheim, É. (1982 [1895]). The Rules of Sociological Method: And Selected Texts on Sociology and its Method. England: Palgrave Macmillan.

Fitzgerald, D., and Callard, F. (Forthcoming). Social science and neuroscience beyond interdisciplinarity: experimental entanglements. Theory Cult. Soc.

Fleck, L. (1979). Genesis and Development of a Scientific Fact. Chicago: Chicago University Press.

Frith, C. (2012). Consciousness: why bother? The Guardian. Available at: http://www.theguardian.com/science/2012/mar/02/consciousness-why-bother

Fusaroli, R., Bahrami, B., Olsen, K., Roepstorff, A., Rees, G., Frith, C., et al. (2012). Coming to terms: quantifying the benefits of linguistic coordination. Psychol. Sci. 23, 931–939. doi: 10.1177/0956797612436816

Good, J. M. (2000). Disciplining social psychology: a case study of boundary relations in the history of the human sciences. J. Hist. Behav. Sci. 36, 383–403. doi: 10.1002/1520-6696(200023)36:4<383::aid-jhbs6>3.0.co;2-l

Greenwood, J. D. (1994). Realism, Identity and Emotion: Reclaiming Social Psychology. London: Sage.

Guzzo, R. A., and Shea, G. P. (1992). “Group performance and intergroup relations in organizations,” in Handbook of Industrial and Organizational Psychology , (Vol. 3), eds M. D. Dunnette and L. M. Hough, 2nd Edn. (Palo Alto: Consulting Psychologist Press), 269–313.

Hartmann, M. (2012). “Against first nature: critical theory and neuroscience,” in Critical Neuroscience: A Handbook of the Social and Cultural Contexts of Neuroscience , eds S. Choudhury and J. Slaby (London: Wiley-Blackwell), 67–84.

Honigsbaum, M. (2013). Human brain project: Henry Markram plans to spend €1bn http://www.theguardian.com/science/2013/oct/15/human-brain-project-henry-markram

Kanai, R., and Banissy, M. (2010). Are two heads better than one? It depends. Scientific American. Available at: http://www.scientificamerican.com/article/are-two-heads-better-than/

Kirmayer, L. J. (2012). “The future of critical neuroscience,” in Critical Neuroscience: A Handbook of the Social and Cultural Contexts of Neuroscience , eds S. Choudhury and J. Slaby (London: Wiley-Blackwell), 367–383.

Kuhn, T. S. (1976). Mathematical vs. experimental traditions in the development of physical science. J. Interdiscip. Hist. 7, 1–31. doi: 10.2307/202372

Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.

Latour, B. (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press.

Latour, B. (2004). Why has critique run out of steam? from matters of fact to matters of concern. Crit. Inq. 30, 225–248. doi: 10.1086/421123

Latour, B. (2008). What is the Style of Matters of Concern? Two Lectures in Empirical Philosophy. Amsterdam: Van Gorcum.

Latour, B. (2010). An attempt at a compositionist manifesto. New Lit. Hist. 41, 471–490. doi: 10.1353/nlh.2010.0022

Lynch, Z., and Laursen, B. (2010). The Neuro Revolution: How Brain Science is Changing our World. New York: St. Martin’s Press.

Malabou, C. (2008). What Should We Do with Our Brain? New York: Fordham University Press.

Markoff, J., and Gorman, J. (2013). Obama to unveil initiative to map the human brain. The New York Times. Available at: http://www.nytimes.com/2013/04/02/science/obama-to-unveil-initiative-to-map-the-human-brain.html

Marks, J. H. (2010). A neuroskeptic’s guide to neuroethics and national security. AJOB Neurosci. 1, 4–12. doi: 10.1080/21507741003699256

Martin, E. (2000). AES presidential address - mind-body problems. Am. Ethnol. 27, 569–590. doi: 10.1525/ae.2000.27.3.569

Martin, E. (2004). “Talking back to neuro-reductionism,” in Cultural Bodies: Ethnography and Theory , eds H. Thomas and J. Ahmed (Oxford: Blackwell), 190–211.

Martínez Mateo, M., Cabanis, M., Cruz de Echeverría Loebell, N., and Krach, S. (2013). On the role of critique for science: a reply to Bao and Pöppel. Neurosci. Biobehav. Rev. 37, 723–725. doi: 10.1016/j.neubiorev.2012.11.006

Mercier, H., and Sperber, D. (2012). “Two heads are better” stands to reason. Science 336:979. doi: 10.1126/science.336.6084.979-a

Milgram, S. (1963). Behavioral study of obedience. J. Abnorm. Psychol. 67, 371–378. doi: 10.1037/h0040525

Milgram, S. (1965). Some conditions of obedience and disobedience to authority. Hum. Relat. 18, 57–76. doi: 10.1177/001872676501800105

Milgram, S. (1974). Obedience to Authority: An Experimental View. London: Tavistock.

Moghaddam, F. M., and Harré, R. (1992). Rethinking the laboratory experiment. Am. Behav. Scientist 36, 22–38. doi: 10.1177/0002764292036001004

Ortega, F., and Vidal, F. (2007). Mapping the cerebral subject in contemporary culture. RECIIS: Electron. J. Commun. Inf. Innovation Health 1, 255–259. doi: 10.3395/reciis.v1i2.90en

Parkovnick, S. (2000). Contextualizing Floyd Allports’s social psychology. J. Hist. Behav. Sci. 36, 429–441. doi: 10.1002/1520-6696(200023)36:4<429::aid-jhbs8>3.0.co;2-q

Pepitone, A. (1981). Lessons from the history of social psychology. Am. Psychologist 36, 972–985. doi: 10.1037//0003-066x.36.9.972

Pickersgill, M., Cunningham-Burley, S., and Martin, P. (2011). Constituting neurologic subjects: neuroscience, subjectivity and the mundane significance of the brain. Subjectivity 4, 346–365. doi: 10.1057/sub.2011.10

Porter, T. M. (1996). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.

Rheinberger, H.-J. (2001). History of science and the practices of experiment. Hist. Philos. Life Sci. 23, 51–63.

Rheinberger, H.-J. (1994). Experimental systems: historiality, narration and deconstruction. Sci. Context 7, 65–81. doi: 10.1017/s0269889700001599

Rheinberger, H.-J. (2011). Consistency from the perspective of an experimental systems approach to the sciences and their epistemic objects. Manuscrito 34, 307–321. doi: 10.1590/S0100-60452011000100014

Roepstorff, A. (2002). Transforming subjects into objectivity. An ethnography of knowledge in a brain imaging laboratory. FOLK, J. Dan. Ethnographic Society 44, 145–170.

Roepstorff, A., and Frith, C. (2012). Neuroanthropology or simply anthropology? Going experimental as method, as object of study and as research aesthetic. Anthropol. Theory 12, 101–111. doi: 10.1177/1463499612436467

Rose, N. (1996). The death of the social? Re-figuring the territory of government. Econ. Soc. 25, 327–356. doi: 10.1080/03085149600000018

Rose, N., and Abi-Rached, J. M. (2013). Neuro: The New Brain Sciences and the Management of the Mind. Princeton, NJ: Princeton University Press.

Rose, S. (2005). The Future of the Brain: The Promise and Perils of Tomorrow’s Neuroscience. Oxford, UK: Oxford University Press Inc.

Sennett, R. (1999). The Corrosion of Character: The Personal Consequences of Work in the New Capitalism. New York: W.W. Norton.

Shapin, S. (2010). Never Pure: Historical Studies of Science as if It Was Produced by People with Bodies, Situated in Time, Space, Culture, and Society, and Struggling for Credibility and Authority. Baltimore, MD: Johns Hopkins University Press.

Sheldon, K. M., and King, L. (2001). Why positive psychology is necessary. Am. Psychol. 56, 216–217. doi: 10.1037/0003-066x.56.3.216

Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., and Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science 303, 1157–1162. doi: 10.1126/science.1093535

Slaby, J. (2010). Steps towards a critical neuroscience. Phenomenology Cogn. Sci. 9, 397–416. doi: 10.1007/s11097-010-9170-2

Slaby, J., and Choudhury, S. (2012). “Proposal for a critical neuroscience,” in Critical Neuroscience: A Handbook of the Social and Cultural Contexts of Neuroscience , eds S. Choudhury and J. Slaby (London: Wiley-Blackwell), 27–51.

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683

Wegner, D. M. (2002). The Illusion of Conscious will. Cambridge, MA: MIT Press.

Keywords: Critical Neuroscience, experiment, critique, the social, sociology of neuroscience, optimally interacting minds, interdisciplinarity

Citation: Fitzgerald D, Matusall S, Skewes J and Roepstorff A (2014) What’s so critical about Critical Neuroscience? Rethinking experiment, enacting critique. Front. Hum. Neurosci. 8:365. doi: 10.3389/fnhum.2014.00365

Received: 15 February 2014; Accepted: 13 May 2014; Published online: 30 May 2014.

Copyright © 2014 Fitzgerald, Matusall, Skewes and Roepstorff. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Des Fitzgerald, Department of Social Science Health and Medicine, King’s College London, D11 East Wing, Strand Campus, London WC2R 2LS, UK e-mail: [email protected]

This article is part of the Research Topic

Critical Neuroscience: The context and implications of human brain research

Landmark study shows that ‘transcendent’ thinking may grow teens’ brains over time

CANDLE scientists find that adolescents who grapple with the bigger meaning of social situations experience greater brain growth, which predicts stronger identity development and life satisfaction years later.

Scientists at the University of Southern California Rossier School of Education’s Center for Affective Neuroscience, Development, Learning and Education (CANDLE) have shown for the first time that a type of thinking that has been described for over a century as a developmental milestone of adolescence may grow teenagers’ brains over time. This kind of thinking, which the study’s authors call “transcendent,” moves beyond reacting to the concrete specifics of social situations to also consider the broader ethical, systems-level and personal implications at play. Engaging in this type of thinking involves analyzing situations for their deeper meaning, historical contexts, civic significance, and/or underlying ideas.

The research team, led by USC Rossier Professor Mary Helen Immordino-Yang, includes Rebecca J.M. Gotlieb, research scientist at UCLA, and Xiao-Fei Yang, assistant research professor at USC Rossier. The study, “Diverse adolescents’ transcendent thinking predicts young adult psychosocial outcomes via brain network development,” is published in Scientific Reports.

In previous studies, the authors had shown that when teens and adults think about issues and situations in a transcendent way, many brain systems coordinate their activity, among them two major networks important for psychological functioning: the executive control network and the default mode network. The executive control network is involved in managing focused and goal-directed thinking, while the default mode network is active during all kinds of thinking that transcends the “here and now,” such as when recalling personal experiences, imagining the future, feeling enduring emotions such as compassion, gratitude and admiration for virtue, daydreaming or thinking creatively.

The researchers privately interviewed 65 high school students, aged 14–18, about true stories of other teens from around the world and asked the students to explain how each story made them feel. The students then underwent fMRI brain scans that day and again two years later. The researchers followed up with the participants twice more over the next three years, as they moved into their early twenties.

The researchers found that every teen in the experiment talked at least to some extent about the bigger picture: what lessons they took from a particularly poignant story, or how a story may have changed their perspective on something in their own life or the lives and futures of others. However, while all of the participating teens could think transcendently, some did so far more than others, and that was what made the difference. The more a teen grappled with the bigger picture and tried to learn from the stories, the more that teen increased the coordination between brain networks over the next two years, regardless of their IQ or socioeconomic status. This brain growth (not how a teen's brain compared to other teens' brains, but how a teen's brain compared to their own brain two years earlier) in turn predicted important developmental milestones, such as identity development in the late teen years and life satisfaction in young adulthood, about five years later.

The findings reveal a novel predictor of brain development: transcendent thinking. The researchers believe transcendent thinking may grow the brain because it requires coordinating brain networks involved in effortful, focused thinking, like the executive control network, with those involved in internal reflection and free-form thinking, like the default mode network. These findings "have important implications for the design of middle and high schools, and potentially also for adolescent mental health," lead researcher Immordino-Yang says. The findings suggest "the importance of attending to adolescents' needs to engage with complex perspectives and emotions on the social and personal relevance of issues, such as through civically minded educational approaches," Immordino-Yang explains. Overall, Immordino-Yang underscores "the important role teens play in their own brain development through the meaning they make of the social world."

Mary Helen Immordino-Yang

  • Fahmy and Donna Attallah Chair in Humanistic Psychology
  • Director, USC Center for Affective Neuroscience, Development, Learning and Education (candle.usc.edu)
  • Professor of Education, Psychology & Neuroscience
  • Brain & Creativity Institute; Rossier School of Education, University of Southern California
  • Member, U.S. National Academy of Education

Xiao-Fei Yang

  • Assistant Research Professor; Scientific Director, Center for Affective Neuroscience, Development, Learning and Education (CANDLE)


