
Fluency, Reasoning and Problem Solving: What This Looks Like In Every Maths Lesson

Neil Almond

Fluency, reasoning and problem solving have been central to the maths national curriculum for primary schools introduced in 2014. Here we look at how these three elements of maths can be interwoven in a child’s maths education through KS1 and KS2. We look at what fluency, reasoning and problem solving are, how to teach them, and how to tell how a child is progressing in each – as well as what to do when they’re not, and what to avoid.

The hope is that this blog will help primary school teachers think carefully about their practice and the pedagogical choices they make around the teaching of reasoning and problem solving in particular.

Before we can think about what this would look like in practice, however, we need to understand the background to these terms.

What is fluency in maths?

Fluency in maths is a fairly broad concept. The basics of mathematical fluency – as defined by the KS1 / KS2 National Curriculum for maths – involve knowing key mathematical facts and being able to recall them quickly and accurately.

But true fluency in maths (at least up to Key Stage 2) means being able to apply the same skill to multiple contexts, and being able to choose the most appropriate method for a particular task.

Fluency in maths lessons means we teach the content using a range of representations, to ensure that all pupils understand and have sufficient time to practise what is taught.

Read more: How the best schools develop maths fluency at KS2.

What is reasoning in maths?

Reasoning in maths is the process of applying logical thinking to a situation to derive the correct problem solving strategy for a given question, and using this method to develop and describe a solution.

Put more simply, mathematical reasoning is the bridge between fluency and problem solving. It allows pupils to use the former to accurately carry out the latter.

Read more: Developing maths reasoning at KS2: the mathematical skills required and how to teach them.

What is problem solving in maths?

It’s sometimes easier to start off with what problem solving is not. Problem solving is not necessarily just about answering word problems in maths. If a child already has a readily available method to solve this sort of problem, problem solving has not occurred. Problem solving in maths is finding a way to apply knowledge and skills you have to answer unfamiliar types of problems.

Read more: Maths problem solving: strategies and resources for primary school teachers.

We are all problem solvers

First off, problem solving should not be seen as something that some pupils can do and some cannot. Every single person is born with an innate level of problem-solving ability.

Early on as a species on this planet, we solved problems like recognising faces we know, protecting ourselves against other species, and as babies the problem of getting food (by crying relentlessly until we were fed).

All these scenarios are a form of what the evolutionary psychologist David Geary (1995) calls biologically primary knowledge. We have been solving these problems for millennia and they are so ingrained in our DNA that we learn them without any specific instruction.

[Image: a baby crying, illustrating ingrained problem-solving ability]

Why then, if we have this innate ability, does actually teaching problem solving seem so hard?

Mathematical problem solving is a learned skill

As you might have guessed, the domain of mathematics is far from innate. Maths doesn’t just happen to us; we need to learn it. It needs to be passed down from experts that have the knowledge to novices who do not.

This is what Geary calls biologically secondary knowledge. Solving problems (within the domain of maths) is a mixture of both primary and secondary knowledge.

The issue is that problem solving in domains that are classified as biologically secondary knowledge (like maths) can only be improved by practising elements of that domain.

So there is no generic problem-solving skill that can be taught in isolation and transferred to other areas.

This will have important ramifications for pedagogical choices, which I will go into more detail about later on in this blog.

The educationalist Dylan Wiliam had this to say on the matter: ‘for…problem solving, the idea that pupils can learn these skills in one context and apply them in another is essentially wrong.’ (Wiliam, 2018)

So what is the best method of teaching problem solving to primary maths pupils?

The answer is that we teach them plenty of domain specific biological secondary knowledge – in this case maths. Our ability to successfully problem solve requires us to have a deep understanding of content and fluency of facts and mathematical procedures.

Here is what cognitive psychologist Daniel Willingham (2010) has to say:

‘Data from the last thirty years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts, and that’s true not simply because you need something to think about.

The very processes that teachers care about most—critical thinking processes such as reasoning and problem solving—are intimately intertwined with factual knowledge that is stored in long-term memory (not just found in the environment).’

Colin Foster (2019), a reader in Mathematics Education in the Mathematics Education Centre at Loughborough University, says, ‘I think of fluency and mathematical reasoning, not as ends in themselves, but as means to support pupils in the most important goal of all: solving problems.’

In that paper he produces this pyramid:

[Image: pyramid diagram showing the link between fluency, reasoning and problem solving]

This is important for two reasons:

1)    It splits up reasoning skills and problem solving into two different entities

2)    It demonstrates that fluency is not something to be rushed through to get to the ‘problem solving’ stage but is rather the foundation of problem solving.

In my own work I adapt this model and turn it into a cone shape, as education seems to have a problem with pyramids and gross misinterpretation of them (think Bloom’s taxonomy).

[Image: conical diagram showing the link between fluency, reasoning skills and problem solving]

Notice how we need plenty of fluency of facts, concepts, procedures and mathematical language.

Having this fluency will help with improving logical reasoning skills, which will then lend themselves to solving mathematical problems – but only if it is truly learnt and there is systematic retrieval of this information carefully planned across the curriculum.

Performance vs learning: what to avoid when teaching fluency, reasoning, and problem solving

I mean to make no sweeping generalisation here; this was my experience both at university when training and from working in schools.

At some point schools become obsessed with the ridiculous notion of ‘accelerated progress’. I have heard it used in all manner of educational contexts while training and being a teacher: ‘You will need to show accelerated progress in maths in this lesson’; ‘Ofsted will be looking for accelerated progress’; and so on.

I have no doubt that all of this came from a good place and from those wanting the best possible outcomes – but it is misguided.

I remember being told that we needed to get pupils onto the problem solving questions as soon as possible to demonstrate this mystical ‘accelerated progress’.

This makes sense; you have a group of pupils and you have taken them from not knowing something to working out pretty sophisticated 2-step or multi-step word problems within an hour. How is that not ‘accelerated progress?’

This was a frequent feature of my lessons up until last academic year: teach a mathematical procedure; get the pupils to do about 10 of them in their books; mark these and if the majority were correct, model some reasoning/problem solving questions from the same content as the fluency content; set the pupils some reasoning and word problem questions and that was it.

I wondered if I was the only one who had been taught this while at university so I did a quick poll on Twitter and found that was not the case.

[Image: Twitter poll regarding the teaching of problem solving techniques in primary school]

I know these numbers won’t be big enough for a representative sample but it still shows that others are familiar with this approach.

The issue with the lesson framework I mentioned above is that it does not take into account ‘performance vs learning.’

What IS ‘performance vs learning’?

The premise is that performance in a lesson is not a good proxy for learning.

Yes, those pupils were performing well after I had modelled a mathematical procedure for them, and they managed to get questions correct.

But if problem solving depends on a deep knowledge of mathematics, this approach to lesson structure is going to be very ineffective.

As mentioned earlier, the reasoning and problem solving questions were based on the same maths content as the fluency exercises, making it more likely that pupils would solve problems correctly whether they fully understood them or not.

Chances are that all they’d need to do is find the numbers in the questions and use the same method they used in the fluency section to get their answers – not exactly high level problem solving skills.

Teaching to “cover the curriculum” hinders development of strong problem solving skills.

This is one of my worries with ‘maths mastery schemes’ that block content so that, in some circumstances, it is not looked at again until the following year (and with new objectives).

The pressure for teachers to ‘get through the curriculum’ results in many opportunities to revisit content just not happening in the classroom.

Pupils are unintentionally forced to skip ahead in the fluency, reasoning, problem solving chain without proper consolidation of the earlier processes.

As David Didau (2019) puts it, ‘When novices face a problem for which they do not have a conveniently stored solution, they have to rely on the costlier means-end analysis.

This is likely to lead to cognitive overload because it involves trying to work through and hold in mind multiple possible solutions.

It’s a bit like trying to juggle five objects at once without previous practice. Solving problems is an inefficient way to get better at problem solving.’


Fluency and reasoning – Best practice in a lesson, a unit, and a term

By now I hope you have realised that when it comes to problem solving, fluency is king. As such we should look to mastery maths based teaching to ensure that the fluency that pupils need is there.

The answer to what fluency looks like will obviously depend on many factors, including the content being taught and the year group you find yourself teaching.

But we should not consider rushing them on to problem solving or logical reasoning in the early stages of this new content as it has not been learnt, only performed.

I would say that in the early stages of learning, content that requires the end goal of being fluent should take up the majority of lesson time – approximately 60%. The rest of the time should be spent rehearsing and retrieving other knowledge that is at risk of being forgotten about.

This blog on mental maths strategies pupils should learn in each year group is a good place to start when thinking about the core aspects of fluency that pupils should achieve.

Little and often is a good mantra when we think about fluency, particularly when revisiting the key mathematical skills of number bond fluency or multiplication fluency. So when it comes to what fluency could look like throughout the day, consider all the opportunities to get pupils practising.

They could chant multiplications when transitioning. If a lesson in another subject has finished earlier than expected, use that time to quiz pupils on number bonds. Have fluency exercises as part of the morning work.

Read more: How to teach times tables KS1 and KS2 for total recall.

What about best practice over a longer period?

Thinking about what fluency could look like across a unit of work would again depend on the unit itself.

Look at this unit below from a popular scheme of work.

[Image: example scheme of work]

They recommend 20 days to cover 9 objectives. One of these specifically mentions problem solving, so I will set that one aside for the moment – which gives 8 objectives.

I would recommend that the fluency of this unit look something like this:

LY = Last Year

[Image: example first lesson of a unit of work targeted towards fluency]

This type of structure is heavily borrowed from Mark McCourt’s phased learning idea from his book ‘Teaching for Mastery.’

This should not be seen as something set in stone; it would greatly depend on the needs of the class in front of you. But it gives an idea of what fluency could look like across a unit of lessons – though not necessarily all maths lessons.

When we think about a term, we can draw on similar ideas to the one above except that your lessons could also pull on content from previous units from that term.

So lesson one may focus 60% on the new unit and 40% on what was learnt in the previous unit.

The structure could then follow a similar pattern to the one above.

Best practice for problem solving in a lesson, a unit, and a term 

When an adult first learns something new, they cannot solve a problem with it straight away. They need to become familiar with the idea and practise before they can make connections, reason and problem solve with it.

The same is true for pupils. Indeed, it could take up to two years ‘between the mathematics a student can use in imitative exercises and that they have sufficiently absorbed and connected to use autonomously in non-routine problem solving.’ (Burkhardt, 2017).

Practise with facts that are secure

So when we plan for reasoning and problem solving, we need to be looking at content from 2 years ago to base these questions on.

Now, given that much of the content of the KS2 SATs will come from Years 5 and 6, it can be hard to stick to this two-year idea, as pupils will need to solve problems with content that may be only weeks old to them.

But certainly in other year groups, the argument could be made that content should come from previous years.

You could get pupils in Year 4 to solve complicated place value problems with the numbers they should know from Year 2 or 3. This would lessen the cognitive load, freeing up valuable working memory so they can actually focus on solving the problems using content they are familiar with.

Read more: Cognitive load theory in the classroom

Increase complexity gradually.

Once they practise solving these types of problems, they can draw on this knowledge later when solving problems with more difficult numbers.

This is what Mark McCourt calls the ‘Behave’ phase. In his book he writes:

‘Many teachers find it an uncomfortable – perhaps even illogical – process to plan the ‘Behave’ phase as one that relates to much earlier learning rather than the new idea, but it is crucial to do so if we want to bring about optimal gains in learning, understanding and long term recall.’  (Mark McCourt, 2019)

This just shows the fallacy of ‘accelerated progress’: the expectation that, in the space of 20 minutes, teachers should move pupils from fluency through to non-routine problem solving, or else they are somehow not catering to the needs of the child.

When considering what problem solving lessons could look like, here’s an example structure based on the objectives above.

[Image: example lesson of a unit using fluency and reasoning to embed problem solving]

Fluency, Reasoning and Problem Solving should NOT be taught by rote 

It is important to reiterate that this is not something that should be set in stone. Key to getting the most out of this teaching for mastery approach is ensuring your pupils (across abilities) are interested and engaged in their work.

Depending on the previous attainment and abilities of the children in your class, you may find that a few have come across some of the mathematical ideas you have been teaching, and so they are able to problem solve effectively with these ideas.

Equally likely is encountering pupils on the opposite side of the spectrum, who may not have fully grasped the concept of place value and will need to go further back than 2 years and solve even simpler problems.

In order to have the greatest impact on class performance, you will have to account for these varying experiences in your lessons.

Read more: 

  • Maths Mastery Toolkit : A Practical Guide To Mastery Teaching And Learning
  • Year 6 Maths Reasoning Questions and Answers
  • Get to Grips with Maths Problem Solving KS2
  • Mixed Ability Teaching for Mastery: Classroom How To
  • 21 Maths Challenges To Really Stretch Your More Able Pupils
  • Maths Reasoning and Problem Solving CPD Powerpoint
  • Why You Should Be Incorporating Stem Sentences Into Your Primary Maths Teaching



Fluency, reasoning and problem solving in primary maths

Tes Resources Team

Develop fluency, reasoning and problem solving within any topic as part of a mastery approach

The skills of fluency, reasoning and problem solving are well-known to all primary maths teachers. In mastery teaching, they play an essential role in helping pupils to gain a deeper understanding of a topic. But what does this look like in practice?

For more information on mastery, check out this handy introduction.

Firstly, problem solving is at the heart of mastering maths. While there is nothing new about using problem-solving questions to consolidate understanding, mastery gets teachers to rethink the traditional lengthy word-problem format. Instead, problem-solving questions are often open-ended, with more than one right answer. 

Problem solving is an important skill for all ages and abilities and, as such, needs to be taught explicitly. It is therefore useful to have challenges like these at the end of every lesson.

Secondly, verbal reasoning demonstrates that pupils understand the maths. Talk is an integral part of mastery as it encourages students to reason, justify and explain their thinking. This is tricky for many teachers who are not used to focusing on verbal reasoning in their maths lessons. You might, for example, get young learners to voice their thought processes. Older students could take part in class debates, giving them the space to challenge their peers using logical reasoning.

Introducing scaffolded sentence structures when talking about maths gives pupils the confidence to communicate their ideas clearly, before writing them down. A mastery classroom should never be a quiet classroom.

Finally, fluency, reasoning and problem solving underpin the deepening of understanding. Fluency alone doesn’t give students the chance to delve deeper into the mathematics. They may well be able to answer the questions, but can they also justify their answer or explore other possibilities?

Typically, teachers start new topics by developing fluency in order to give learners confidence with the skill. However, sometimes starting with a problem-solving question – e.g. ‘Prove that 4 + 3 = 7’ – deepens understanding sooner. How? Pupils have to rely on resources they’ve used elsewhere, such as concrete manipulatives and pictorial representations, to help them explain the maths.

When planning, try not to get hung up on whether an activity focuses on either reasoning or problem solving as often it is a combination. Instead, turn your attention to using these types of questions to secure fluency and ensure that all children move beyond it into a world of deeper understanding.

Fluency, reasoning and problem solving in your classroom

Embedding these concepts into your everyday teaching can take time, so patience is key! Mastery specialists recommend being more fluid with your planning and investing more time in making resources that will allow you to be responsive to the progress made in lessons.

We’ve hand-picked these useful ideas to get you started:

This blog post was written with grateful thanks to Jenny Lewis, Primary Maths Specialist at the White Rose Maths Hub, and Helen Williams, Director of Primary at Mathematics Mastery, for their insights.

Want to know more about primary maths mastery? Check out our collection of free resources, quality assured by mastery experts and helpfully mapped by topic to year groups and learning objectives.


Mathematical Reasoning & Problem Solving

In this lesson, we’ll discuss mathematical reasoning and methods of problem solving with an eye toward helping your students make the best use of their reasoning skills when it comes to tackling complex problems.

Previously Covered:

  • Over the course of the previous lesson, we reviewed some basics about chance and probability, as well as some basics about sampling, surveys, etc. We also covered some ideas about data sets, how they’re represented, and how to interpret the results.

Approaches to Problem Solving

When solving a mathematical problem, it is very common for a student to feel overwhelmed by the information or lack a clear idea about how to get started.

To help the students with their problem-solving “problem,” let’s look at some examples of mathematical problems and some general methods for solving problems:

Identify a four-digit number given the following information:

  • One of the four digits is a 1.
  • The digit in the hundreds place is three times the digit in the thousands place.
  • The digit in the ones place is four times the digit in the tens place.
  • The sum of all four digits is 13.
  • The digit 2 is in the thousands place.

Help your students identify and prioritize the information presented.

In this particular example, we want to look for concrete information. Clue #1 tells us that one digit is a 1, but we’re not sure of its location, so we see if we can find a clue with more concrete information.

We can see that clue #5 gives us that kind of information and is the only clue that does, so we start from there.

Because this clue tells us that the thousands place digit is 2, we search for clues relevant to this clue. Clue #2 tells us that the digit in the hundreds place is three times that of the thousands place digit, so it is 6.

So now we need to find the tens and ones place digits. Clue #3 tells us that the digit in the ones place is four times the digit in the tens place. But clue #1 tells us that there is a 1 somewhere, and since 1 is not four times any digit, the 1 must be in the tens place. That makes the digit in the ones place 4, so we conclude that our number is 2614.

If you were following closely, you would notice that clue #4 was never used. It is a nice way to check our answer, since the digits of 2614 do indeed add up to thirteen, but we did not need this clue to solve the problem.

Recall that the clues’ relevance was identified and prioritized as follows:

  • clue #5 (the concrete starting point)
  • clue #2
  • clue #3 and clue #1
  • clue #4 (used only as a check)

By identifying and prioritizing information, we were able to make the information given in the problem seem less overwhelming. We ordered the clues by relevance, with the most relevant clue providing us with a starting point to solve the problem. This method also utilized the more general method of breaking a problem into smaller and simpler parts to make it easier to solve.
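As a quick aside, the puzzle is small enough to check exhaustively. The short sketch below is not part of the original lesson (the clue encoding is mine); it simply brute-forces every four-digit candidate against the five clues:

```python
# Brute-force check of the four-digit puzzle: encode each clue as a
# boolean test and try every four-digit candidate.
for n in range(1000, 10000):
    th, h, t, o = n // 1000, (n // 100) % 10, (n // 10) % 10, n % 10
    clues = [
        1 in (th, h, t, o),    # clue 1: one of the digits is a 1
        h == 3 * th,           # clue 2: hundreds digit = 3 x thousands digit
        o == 4 * t,            # clue 3: ones digit = 4 x tens digit
        th + h + t + o == 13,  # clue 4: the digits sum to 13
        th == 2,               # clue 5: the thousands digit is 2
    ]
    if all(clues):
        print(n)  # prints 2614, the unique solution
```

Dropping the clue-4 test still leaves 2614 as the only match, confirming that clue #4 is redundant, just as the reasoning above found.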

Now let’s look at another mathematical problem and another general problem-solving method to help us solve it:

Two trees with heights of 20 m and 30 m respectively have ropes running from the top of each tree to the bottom of the other tree. The trees are 40 meters apart. We’ll assume that the ropes are pulled tight enough that we can ignore any bending or drooping. How high above the ground do the ropes intersect?

Let’s solve this problem by representing it in a visual way, in this case a diagram:

You can see that we have a much simpler problem on our hands after drawing the diagram. A, B, C, D, E, and F are vertices of the triangles in the diagram. Now also notice that:

b = the base of triangle EFA

h = the height of triangle EFA and the height above the ground at which the ropes intersect

If we had not drawn this diagram, it would have been very hard to solve this problem, since we need the triangles and their properties to solve for h. Also, this diagram allows us to see that triangle BCA is similar to triangle EFC, and triangle DCA is similar to triangle EFA. Solving for h shows that the ropes intersect twelve meters above the ground.
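For readers who want the algebra behind the twelve meters, here is a compact derivation consistent with the diagram described (the labels h1, h2, d and x are mine): let the tree heights be h1 = 20 m and h2 = 30 m, let the trees be a distance d apart, and suppose the ropes cross at height h, a horizontal distance x from the first tree. Similar triangles give x/d = h/h2 for one rope and (d − x)/d = h/h1 for the other. Adding the two equations:

$$\frac{h}{h_1} + \frac{h}{h_2} = \frac{x}{d} + \frac{d - x}{d} = 1 \quad\Longrightarrow\quad h = \frac{h_1 h_2}{h_1 + h_2} = \frac{20 \times 30}{20 + 30} = 12\ \text{m}.$$

Notice that d cancels out: the 40-meter separation between the trees does not affect the answer.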

Students frequently complain that mathematics is too difficult for them, because it is too abstract and unapproachable. Explaining mathematical reasoning and problem solving by using a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and concrete models, can help students understand the problem better by making it more concrete and approachable.

Let’s try another one.

Given a pickle jar filled with marbles, about how many marbles does the jar contain?

Problems like this one require the student to make and use estimations. In this case, an estimation is all that is required, although, in more complex problems, estimates may help the student arrive at the final answer.

How would a student do this? A good estimation can be found by counting how many marbles are on the base of the jar and multiplying that by the number of marbles that make up the height of the marbles in the jar. For example (with purely illustrative numbers), about 30 marbles covering the base and a stack about 15 marbles high gives an estimate of 30 × 15 = 450 marbles.

Now to make sure that we understand when and how to use these methods, let’s solve a problem on our own:

How many more faces does a cube have than a square pyramid?


To see how many more faces a cube has than a square pyramid, it is best to draw a diagram of a square pyramid and a cube:

From the diagrams above, we can see that the square pyramid has five faces and the cube has six. Therefore, the cube has one more face.

Before we start having the same problem our model student in the beginning did—that is, being overwhelmed with too much information—let’s have a quick review of all the problem-solving methods we’ve discussed so far:

  • Sort and prioritize relevant and irrelevant information.
  • Represent a problem in different ways, such as words, symbols, concrete models, and diagrams.
  • Generate and use estimations to find solutions to mathematical problems.

Mathematical Mistakes

Along with learning methods and tools for solving mathematical problems, it is important to recognize and avoid ways to make mathematical errors. This section will review some common errors.

Circular Arguments

These involve drawing a conclusion from a premise that is itself dependent on the conclusion. In other words, you are not actually proving anything. Circular reasoning often looks like deductive reasoning, but a quick examination will reveal that it’s far from it. Consider the following argument:

  • Premise: Only an untrustworthy man would become an insurance salesman; the fact that insurance salesmen cannot be trusted is proof of this.
  • Conclusion: Therefore, insurance salesmen cannot be trusted.

While this may be a simplistic example, you can see that there’s no logical procession in a circular argument.

Assuming the Truth of the Converse

Simply put: the fact that A implies B does not necessarily mean that B implies A. For example, “All dogs are mammals; therefore, all mammals are dogs.”

Assuming the Truth of the Inverse

Watch out for this one. You cannot automatically assume the inverse of a given statement is true. Consider the following true statement:

If you grew up in Minnesota, you’ve seen snow.

Now, notice that the inverse of this statement is not necessarily true:

If you didn’t grow up in Minnesota, you’ve never seen snow.
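It can help to see all four related forms side by side (a standard summary, added here for reference):

$$\begin{aligned} \text{statement:}\quad & p \to q \\ \text{converse:}\quad & q \to p \\ \text{inverse:}\quad & \neg p \to \neg q \\ \text{contrapositive:}\quad & \neg q \to \neg p \end{aligned}$$

Only the contrapositive is logically equivalent to the original statement; assuming the converse or the inverse is the mistake described in these two sections. In the Minnesota example, the valid contrapositive would be: if you’ve never seen snow, you didn’t grow up in Minnesota.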

Faulty Generalizations

This mistake (also known as inductive fallacy) can take many forms, the most common being assuming a general rule based on a specific instance: (“Bridge is a hard game; therefore, all card games are difficult.”) Be aware of more subtle forms of faulty generalizations.

Faulty Analogies

It’s a mistake to assume that because two things are alike in one respect that they are necessarily alike in other ways too. Consider the faulty analogy below:

People who absolutely have to have a cup of coffee in the morning to get going are as bad as alcoholics who can’t cope without drinking.

False (or tenuous) analogies are often used in persuasive arguments.

Now that we’ve gone over some common mathematical mistakes, let’s look at some correct and effective ways to use mathematical reasoning.

Let’s look at basic logic, its operations, some fundamental laws, and the rules of logic that help us prove statements and deduce the truth. First off, there are two different styles of proofs: direct and indirect.

Whether it’s a direct or indirect proof, the engine that drives the proof is the if-then structure of a logical statement. In formal logic, you’ll see the format using the letters p and q, representing statements, as in:

If p, then q

An arrow is used to indicate that q is derived from p, like this: p → q.

This would be the general form of many types of logical statements, similar to: “if Joe has 5 cents, then Joe has a nickel or Joe has 5 pennies”. Basically, a proof is a flow of implications starting with the statement p and ending with the statement q. The stepping stones we use to link these statements along the way are called axioms or postulates, which are accepted logical tools.

A direct proof will attempt to lay out the shortest number of steps between p and q.

The goal of an indirect proof is exactly the same—it wants to show that q follows from p; however, it goes about it in a different manner. An indirect proof also goes by the names “proof by contradiction” or reductio ad absurdum. This type of proof assumes that the opposite of what you want to prove is true, and then shows that this is untenable or absurd, so, in fact, your original statement must be true.

Let’s see how this works using the isosceles triangle below. The indirect proof assumption is marked in the steps that follow.

Given: Triangle ABC is isosceles with B marking the vertex

Prove: Angles A and C are congruent.

Now, let’s work through this, matching our statements with our reasons.

  • Triangle ABC is isosceles . . . . . . . . . . . . Given
  • Angle B is the vertex . . . . . . . . . . . . . . . . Given
  • Angles A and C are not congruent . . Indirect proof assumption
  • Line AB is equal to line BC . . . . . . . . . . . Legs of an isosceles triangle are congruent
  • Angles A and C are congruent . . . . . . . . The angles opposite congruent sides of a triangle are congruent
  • Contradiction . . . . . . . . . . . . . . . . . . . . . . Angles can’t be congruent and incongruent
  • Angles A and C are indeed congruent . . . The indirect proof assumption (step 3) is wrong
  • Therefore, angles A and C are congruent.
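In schematic form (a compact restatement, not from the original lesson), an indirect proof establishes p → q by showing that assuming p together with the negation of q leads to a contradiction:

$$\bigl(p \land \neg q \implies \bot\bigr) \;\implies\; \bigl(p \to q\bigr)$$

where ⊥ stands for a contradiction. Since p ∧ ¬q is untenable, q must hold whenever p does.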

“Always, Sometimes, and Never”

Some math problems hinge on deciding whether statements are “always”, “sometimes” or “never” true.

Example: x < x² for all real numbers x

We may be tempted to say that this statement is “always” true because, trying values of x like -2 and 3, we see that -2 < 4 and 3 < 9. But the statement fails for values between 0 and 1 (for x = 1/2, we have 1/2 > 1/4), and at x = 0 and x = 1 the two sides are equal. So the statement is only “sometimes” true.
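A quick numeric check makes the point concrete (a small sketch of mine, not from the original lesson); note that sampling can refute an “always” claim but can never prove one:

```python
# Test the claim "x < x**2 for all real x" against sample values.
for x in (-2, 3, 0.5, 1, 0):
    print(x, x < x**2)
# -2 and 3 satisfy the inequality, but 0.5, 1 and 0 do not,
# so the claim is only "sometimes" true.
```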

Example: For all primes x ≥ 3, x is odd.

This statement is “always” true. The only prime that is not odd is two. If we had a prime x ≥ 3 that is not odd, it would be divisible by two, which would make x not prime.

  • Know and be able to identify common mathematical errors, such as circular arguments, assuming the truth of the converse, assuming the truth of the inverse, making faulty generalizations, and faulty use of analogical reasoning.
  • Be familiar with direct proofs and indirect proofs (proof by contradiction).
  • Be able to work with problems to identify “always,” “sometimes,” and “never” statements.

Oxford Education Blog

The latest news and views on education from Oxford University Press.

The role of reasoning in supporting problem solving and fluency


A recent webinar with Mike Askew explored the connection between reasoning, problem solving and fluency. This blog post summarises the key takeaways from that webinar.

Using reasoning to support fluency and problem solving 

You’ll probably be very familiar with the aims of the National Curriculum for mathematics in England: fluency, problem-solving and reasoning. An accepted logic of progression for these is for children to become fluent in the basics, apply this to problem-solving, and then reason about what they have done. However, this sequence tends towards treating reasoning as the icing on the cake, suggesting that it might be a final step that not all children in the class will reach. So let’s turn this logic on its head and consider the possibility that much mathematical reasoning is in actual fact independent of arithmetical fluency.

What does progress in mathematical reasoning look like?

Since we cannot actually ‘see’ children’s progression in learning, in the way we can see a journey’s progression on a SatNav, we often use metaphors to talk about progression in learning. One popular metaphor is to liken learning to ‘being on track’, with the implication that we can check whether children are going in the right direction, reaching ‘stations’ of fluency along the way. Or we talk about progression in learning as though it were similar to building up blocks, where some ideas provide the ‘foundations’ that can be ‘built upon’.

Instead of thinking about reasoning as a series of stations along a train track or a pile of building blocks, we can instead take a gardening metaphor, and think about reasoning as an ‘unfolding’ of things. With this metaphor, just as the sunflower ‘emerges’ from the seed, so our mathematical reasoning is contained within our early experiences. A five-year-old may not be able to solve 3 divided by 4, but they will be able to share three chocolate bars between four friends – that early experience of ‘sharing chocolate’ contains the seeds of formal division leading to fractions. 1  

Of course, the five-year-old is not interested in how much chocolate each friend gets, but whether everyone gets the same amount – it’s the child’s interest in relationships between quantities, rather than the actual quantities that holds the seeds of thinking mathematically.  

The role of relationships in thinking mathematically

Quantitative relationships

Quantitative relationships refer to how quantities relate to each other. Consider this example:

I have some friends round on Saturday for a tea party and buy a packet of biscuits, which we share equally. On Sunday, I have another tea party, we share a second, equivalent packet of the biscuits. We share out the same number of biscuits as yesterday, but there are more people at the table. Does each person get more or less biscuits? 2

Once people are reassured that this is not a trick question 3 then it is clear that if there are more people and the same quantity of biscuits, everyone must get a smaller amount to eat on Sunday than the Saturday crowd did. Note, importantly, we can reason this conclusion without knowing exact quantities, either of people or biscuits. 

This example had the change from Saturday to Sunday being that the number of biscuits stayed the same, while the number of people went up. As each of these quantities can do three things between Saturday and Sunday – go down, stay the same, go up – there are nine variations to the problem, summarised in this table, with the solution shown to the particular version above. 

[Table: the nine possible combinations of changes in biscuits and people from Saturday to Sunday, with the solution filled in for the version above]

Before reading on, you might like to take a moment to think about which of the other cells in the table can be filled in. (The solution is at the end of this blog).

It turns out that in 7 out of 9 cases, we can reason what will happen without doing any arithmetic. 4 We can then use this reasoning to help us understand what happens when we do put numbers in. For example, what we essentially have here is a division – quantity of biscuits divided between number of friends – and we can record the changes in the quantities of biscuits and/or people as fractions:

[Image: the Saturday and Sunday shares recorded as the fractions 5/6 and 5/8]

So, the two fractions represent 5 biscuits shared between 6 friends (5/6) and 5 biscuits shared between 8 (5/8). To reason through which of these fractions is bigger we can apply our quantitative reasoning here to see that everyone must get fewer biscuits on Sunday – there are more friends, but the same quantity of biscuits to go around. We do not need to generate images of each fraction to ‘see’ which is larger, and we certainly do not need to put them both over a common denominator of 48.  We can reason about these fractions, not as being static parts of an object, but as a result of a familiar action on the world and in doing so developing our understanding of fractions. This is exactly what MathsBeat does, using this idea of reasoning in context to help children understand what the abstract mathematics might look like.

Structural relationships

By structural relationships, I mean how we can break up and deal with a quantity in structural ways. Try this:

  • Jot down a two-digit number (say, 32)
  • Add the two digits (3 + 2 = 5)
  • Subtract that sum from your original number (32 – 5 = 27)
  • Do at least three more
  • Do you notice anything about your answers?

If you’ve done this, then you’ll probably notice that all of your answers are multiples of nine (and, if like most folks, you just read on, then do check this is the case with a couple of numbers now).

This result might look like a bit of mathematical magic, but there must be a reason.

We might model this using base ten equipment – three tens and two units – decomposing one of the tens into units in order to take away five units. But this probably gives us no sense of the underlying structure or any physical sensation of why we always end up with a multiple of nine.

[Image: base ten model of 32, with one ten decomposed into units to subtract five]

If we approach this differently, thinking about where our five came from – the three and the two – rather than decomposing one of the tens into units, we could start by taking away two, which cancels out the two units.

Then, rather than subtracting three from one of our tens, we could take away one from each ten, leaving us with three nines. A moment’s reflection reveals that this will work for any starting number: in 45 – (4 + 5), the 5 being subtracted clears the five units in 45, and the 4 matches the number of tens, leaving four nines – and that will always be the case. Through the concrete, we begin to get the sense that this will always be true.

[Image: taking one from each of the three tens in 32, leaving three nines]
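The same argument can be put in symbols (a brief formalisation of the point above): write the two-digit number as 10a + b, with tens digit a and units digit b. Then

$$(10a + b) - (a + b) = 9a,$$

which is always a multiple of nine, and the multiplier a is exactly the number of tens, just as in the concrete model.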

If we take this into more formal recording, we are ensuring that children have a real sense of what the structure is: a structural sense, which complements their number sense.

Decomposing and recomposing is one way of doing subtraction, but we’re going beyond this by really unpacking and laying bare the underlying structure: a really powerful way of helping children understand what’s going on.

So in summary, much mathematical reasoning is independent of arithmetical fluency.

This is a bold statement, but as you can see from the examples above, our reasoning doesn’t necessarily depend upon or change with different numbers. In fact, it stays exactly the same. We can even say something is true and have absolutely no idea how to do the calculation. (Is it true that 37.5 × 13.57 = 40 × 13.57 – 2.5 × 13.57?)

Maybe it’s time to reverse the logic and start to think about mathematics emerging from reasoning to problem-solving to fluency.

[Image: head shot of the blog’s author, Mike Askew]

Mike Askew:  Before moving into teacher education, Professor Mike Askew began his career as a primary school teacher. He now researches, speaks and writes on teaching and learning mathematics. Mike believes that all children can find mathematical activity engaging and enjoyable, and therefore develop the confidence in their ability to do maths. 

Mike is also the Series Editor of MathsBeat, a new digitally-led maths mastery programme that has been designed and written to bring a consistent and coherent approach to the National Curriculum, covering all of the aims – fluency, problem solving and reasoning – thoroughly and comprehensively. MathsBeat’s clear progression and easy-to-follow sequence of tasks develops children’s knowledge, fluency and understanding with suggested prompts, actions and questions to give all children opportunities for deep learning. Find out more here.

You can watch Mike’s full webinar, The role of reasoning in supporting problem solving and fluency, here. (Note: you will be taken to a sign-up page and asked to enter your details; this is so that we can email you a CPD certificate on completion of the webinar.)

Solution: changes from Saturday to Sunday and the result

| Biscuits | People | Amount each person gets |
| --- | --- | --- |
| go down | go down | can’t tell without the actual quantities |
| go down | stay the same | less |
| go down | go up | less |
| stay the same | go down | more |
| stay the same | stay the same | the same |
| stay the same | go up | less (the version above) |
| go up | go down | more |
| go up | stay the same | more |
| go up | go up | can’t tell without the actual quantities |

 1 If you would like to read more about this, I recommend Lakoff, G., & Núñez, R. E. (2000). Where mathematics comes from: How the embodied mind brings mathematics into being. Basic Books.

2 Adapted from a problem in: Lamon, S. (2005). Teaching Fractions and Ratios for Understanding. Essential Content Knowledge and Instructional Strategies for Teachers, 2nd Edition. Routledge.

3 Because, of course in this mathematical world of friends, no one is on a diet or gluten intolerant!

4 The more/more and less/less solutions are determined by the actual quantities: biscuits going up by, say, 20 while only one more friend turns up on Sunday is very different from having only 1 more biscuit on Sunday but 20 more friends arriving.

Share this:

One thought on “The role of reasoning in supporting problem solving and fluency”


Hi Mike, I enjoyed reading your post; it has definitely given me a lot of insight into teaching and learning mathematics, as I have struggled to understand generalisations and concepts when dealing solely with numbers as a mathematics learner. I agree that students’ ability to reason develops their understanding of mathematical concepts and keeps the focus on mathematical ideas and why those ideas are important, especially when real-world connections are made: this is relevant to students’ daily lives and easier to grasp than being presented solely with arithmetic problems without the mathematics behind them. The ideas you have presented are ones I will take on when teaching: ensuring that students understand mathematical ideas and use them to justify their responses, which I believe will help students develop confidence and strengthen their ability to extend their thinking when learning mathematics.



Mathematics LibreTexts

2.10: Problem Solving and Decision Making


Learning Objectives

  • Learn to understand the problem.
  • Learn to combine creative thinking and critical thinking to solve problems.
  • Practice problem solving in a group.

Much of your college and professional life will be spent solving problems; some will be complex, such as deciding on a career, and require time and effort to come up with a solution. Others will be small, such as deciding what to eat for lunch, and will allow you to make a quick decision based entirely on your own experience. But, in either case, when coming up with the solution and deciding what to do, follow the same basic steps.

  • Define the problem. Use your analytical skills. What is the real issue? Why is it a problem? What are the root causes? What kinds of outcomes or actions do you expect to generate to solve the problem? What are some of the key characteristics that will make a good choice: Timing? Resources? Availability of tools and materials? For more complex problems, it helps to actually write out the problem and the answers to these questions. Can you clarify your understanding of the problem by using metaphors to illustrate the issue?
  • Narrow the problem. Many problems are made up of a series of smaller problems, each requiring its own solution. Can you break the problem into different facets? What aspects of the current issue are “noise” that should not be considered in the problem solution? (Use critical thinking to separate facts from opinion in this step.)
  • Generate possible solutions. List all your options. Use your creative thinking skills in this phase. Did you come up with the second “right” answer, and the third or the fourth? Can any of these answers be combined into a stronger solution? What past or existing solutions can be adapted or combined to solve this problem?

Group Think: Effective Brainstorming

Brainstorming is a process of generating ideas for solutions in a group. This method is very effective because ideas from one person will trigger additional ideas from another. The following guidelines make for an effective brainstorming session:

  • Decide who should moderate the session. That person may participate, but his main role is to keep the discussion flowing.
  • Define the problem to be discussed and the time you will allow to consider it.
  • Write all ideas down on a board or flip chart for all participants to see.
  • Encourage everyone to speak.
  • Do not allow criticism of ideas. All ideas are good during a brainstorm. Suspend disbelief until after the session. Remember a wildly impossible idea may trigger a creative and feasible solution to a problem.
  • Choose the best solution. Use your critical thinking skills to select the most likely choices. List the pros and cons for each of your selections. How do these lists compare with the requirements you identified when you defined the problem? If you still can’t decide between options, you may want to seek further input from your brainstorming team.

Decisions, Decisions

You will be called on to make many decisions in your life. Some will be personal, like what to major in, or whether or not to get married. Other times you will be making decisions on behalf of others at work or for a volunteer organization. Occasionally you will be asked for your opinion or experience for decisions others are making. To be effective in all of these circumstances, it is helpful to understand some principles about decision making.

First, define who is responsible for solving the problem or making the decision. In an organization, this may be someone above or below you on the organization chart but is usually the person who will be responsible for implementing the solution. Deciding on an academic major should be your decision, because you will have to follow the course of study. Deciding on the boundaries of a sales territory would most likely be the sales manager who supervises the territories, because he or she will be responsible for producing the results with the combined territories. Once you define who is responsible for making the decision, everyone else will fall into one of two roles: giving input, or in rare cases, approving the decision.

Understanding the role of input is very important for good decisions. Input is sought or given due to experience or expertise, but it is up to the decision maker to weigh the input and decide whether and how to use it. Input should be fact based, or if offering an opinion, it should be clearly stated as such. Finally, once input is given, the person giving the input must support the other’s decision, whether or not the input is actually used.

Consider a team working on a project for a science course. The team assigns you the responsibility of analyzing and presenting a large set of complex data. Others on the team will set up the experiment to demonstrate the hypothesis, prepare the class presentation, and write the paper summarizing the results. As you face the data, you go to the team to seek input about the level of detail on the data you should consider for your analysis. The person doing the experiment setup thinks you should be very detailed, because then it will be easy to compare experiment results with the data. However, the person preparing the class presentation wants only high-level data to be considered because that will make for a clearer presentation. If there is not a clear understanding of the decision-making process, each of you may think the decision is yours to make because it influences the output of your work; there will be conflict and frustration on the team. If the decision maker is clearly defined upfront, however, and the input is thoughtfully given and considered, a good decision can be made (perhaps a creative compromise?) and the team can get behind the decision and work together to complete the project.

Finally, there is the approval role in decisions. This is very common in business decisions but often occurs in college work as well (the professor needs to approve the theme of the team project, for example). Approval decisions are usually based on availability of resources, legality, history, or policy.

Key Takeaways

  • Effective problem solving involves critical and creative thinking.
  • The four steps to effective problem solving are: define the problem, narrow the problem, generate solutions, and choose the solution.
  • Brainstorming is a good method for generating creative solutions.
  • Understanding the difference between the roles of deciding and providing input makes for better decisions.

Checkpoint Exercises

Gather a group of three or four friends and conduct three short brainstorming sessions (ten minutes each) to generate ideas for alternate uses for peanut butter, paper clips, and pen caps. Compare the results of the group with your own ideas. Be sure to follow the brainstorming guidelines. Did you generate more ideas in the group? Did the quality of the ideas improve? Were the group ideas more innovative? Which was more fun? Write your conclusions here.

__________________________________________________________________

Using the steps outlined earlier for problem solving, write a plan for the following problem: You are in your second year of studies in computer animation at Jefferson Community College. You and your wife both work, and you would like to start a family in the next year or two. You want to become a video game designer and can benefit from more advanced work in programming. Should you go on to complete a four-year degree?

Define the problem: What is the core issue? What are the related issues? Are there any requirements to a successful solution? Can you come up with a metaphor to describe the issue?

Narrow the problem: Can you break down the problem into smaller manageable pieces? What would they be?

Generate solutions: What are at least two “right” answers to each of the problem pieces?

Choose the right approach: What do you already know about each solution? What do you still need to know? How can you get the information you need? Make a list of pros and cons for each solution.

Building fluency through problem solving


Editor’s Note:

This is an updated version of a blog post published on January 13, 2020.

Problem solving builds fluency and fluency builds problem solving. How can you help learners make the most of this virtuous cycle and achieve mastery?

Fluency. It’s so important that I have written not one, not two, but three blog posts on the subject. It’s also one of the three key aims for the national curriculum.

It’s a common dilemma. Learners need opportunities to apply their knowledge in solving problems and reasoning (the other two NC aims), but can’t reason or solve problems until they’ve achieved a certain level of fluency.

Instead of seeing this as a catch-22, think of fluency and problem solving as a virtuous cycle — working together to help learners achieve true mastery.

Supporting fluency when solving problems

Fluency helps children spot patterns, make conjectures, test them out, create generalisations, and make connections between different areas of their learning — the true skills of working mathematically. When learners can work mathematically, they’re better equipped to solve problems.

But what if learners are not totally fluent? Can they still solve problems? With the right support, problem solving helps learners develop their fluency, which makes them better at problem solving, which develops fluency…

Here are ways you can support your learners’ fluency journey.

Don’t worry about rapid recall

What does it mean to be fluent? Fluency means that learners are able to recall and use facts in a way that is accurate, efficient, reliable, flexible and fluid. But that doesn’t mean that good mathematicians need to have super-speedy recall of facts either.

Putting pressure on learners to recall facts in timed tests can negatively affect their ability to solve problems. Research shows that for about one-third of students, the onset of timed testing is the beginning of maths anxiety. Not only is maths anxiety upsetting for learners, it robs them of working memory and makes maths even harder.

Just because it takes a learner a little longer to recall or work out a fact, doesn’t mean the way they’re working isn’t becoming accurate, efficient, reliable, flexible and fluid. Fluent doesn’t always mean fast, and every time a learner gets to the answer (even if it takes a while), they embed the learning a little more.

Give learners time to think and reason

Psychologist Daniel Willingham describes memory as “the residue of thought”. If you want your learners to become fluent, you need to give them opportunities to think and reason. You can do this by looking for ways to extend problems so that learners have more to think about.

Here’s an example: what is 6 × 7? You could ask your learners for the answer and move on, but why stop there? If learners know that 6 × 7 = 42, how many other related facts can they work out from this? Or if they don’t know 6 × 7, ask them to work it out using facts they do know, like (5 × 7) + (1 × 7) or (6 × 6) + (1 × 6).
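Written out as a worked equation (our illustration, using the same numbers as above), both derivations are instances of the distributive law:

$$6 \times 7 = (5 \times 7) + (1 \times 7) = 35 + 7 = 42 \qquad\text{or}\qquad 6 \times 7 = (6 \times 6) + (1 \times 6) = 36 + 6 = 42.$$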

Spending time exploring problems helps learners to build fluency in number sense, recognise patterns and see connections, and visualise — the three key components of problem solving.

Developing problem solving when building fluency

Learners with strong problem-solving skills can move flexibly between different representations, recognising and showing the links between them. They identify the merits of different strategies, and choose from a range of different approaches to find the one most appropriate for the maths problem at hand.

So, what type of problems should you give learners when they are still building their fluency? The best problem-solving questions exist in a Goldilocks Zone; the problems are hard enough to make learners think, but not so hard that they fail to learn anything.

Here’s how to give them opportunities to develop problem solving.

Centre problems around familiar topics

Learners can develop their problem-solving skills if they’re actively taught them and are given opportunities to put them into practice. When our aim is to develop problem-solving skills, it’s important that the mathematical content isn’t too challenging.

Asking learners to activate their problem-solving skills while applying new learning makes the level of difficulty too high. Keep problems centred around familiar topics (this can even be content taught as long ago as two years previously).

Not only does choosing familiar topics help learners practise their problem-solving skills, revisiting topics will also improve their fluency.

Keep the focus on problem solving, not calculation

What do you want learners to notice when solving a problem? If the focus is developing problem-solving skills, then the takeaway should be the method used to answer the question.

If the numbers involved in a problem are ‘nasty’, learners might spend their limited working memory on calculating and lose sight of the problem. Chances are they’ll have issues recalling the way they solved the problem. On top of that, they’ll learn nothing about problem-solving strategies.

It’s important to make sure that learners have a fluent recall of the facts needed to solve the problem. This way, they can focus on actually solving it rather than struggling to recall facts. To understand the underlying problem-solving strategies, learners need to have the processing capacity to spot patterns and make connections.

The ultimate goal of teaching mathematics is to create thinkers. Making the most of the fluency virtuous cycle helps learners to do so much more than just recall facts and memorise procedures. In time, your learners will be able to work fluently, make connections, solve problems, and become true mathematical thinkers.

Boaler, J. (2014). Research Suggests that Timed Tests Cause Math Anxiety. Teaching Children Mathematics, 20(8), p. 469.

Willingham, D. (2009). Why Don’t Students Like School? A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for Your Classroom. San Francisco: Jossey-Bass.

Gill Knight


Gaining Mathematical Understanding: The Effects of Creative Mathematical Reasoning and Cognitive Proficiency

Bert Jonsson

1 Department of Applied Educational Science, Umeå University, Umeå, Sweden

2 Umeå Mathematics Education Research Center, Umeå, Sweden

Carina Granberg

Johan Lithner

3 Department of Science and Mathematics Education, Umeå University, Umeå, Sweden

Associated Data

The datasets are available on request to the corresponding author.

In the field of mathematics education, one of the main questions remaining under debate is whether students’ development of mathematical reasoning and problem-solving is aided more by solving tasks with given instructions or by solving them without instructions. It has been argued that providing little or no instruction for a mathematical task generates a mathematical struggle, which can facilitate learning. On this view, tasks in which routine procedures can be applied can, in contrast, lead to mechanical repetition with little or no conceptual understanding. This study contrasts Creative Mathematical Reasoning (CMR), in which students must construct the mathematical method, with Algorithmic Reasoning (AR), in which predetermined methods and procedures for solving the task are given. Moreover, measures of fluid intelligence and working memory capacity are included in the analyses alongside the students’ math tracks. The results show that practicing with CMR tasks was superior to practicing with AR tasks in terms of students’ performance on practiced test tasks and transfer test tasks. Cognitive proficiency was shown to have an effect on students’ learning under both the CMR and AR learning conditions. However, math tracks (advanced versus a more basic level) showed no significant effect. It is argued that going beyond step-by-step textbook solutions is essential and that students need to be presented with mathematical activities involving a struggle. In the CMR approach, students must focus on the relevant information in order to solve the task, and the characteristics of CMR tasks can guide students to the structural features that are critical for aiding comprehension.

Introduction

Supporting students’ mathematical reasoning and problem-solving has been pointed out as important by the National Council of Teachers of Mathematics (NCTM). This philosophy is reflected in the wide range of mathematics education research focusing on the impact different teaching designs might have on students’ reasoning, problem-solving ability, and conceptual understanding (e.g., Coles and Brown, 2016; Lithner, 2017). One of the recurrent questions in this field is whether students learn more by solving tasks with given instructions or without them: “The contrast between the two positions is best understood as a continuum, and both ends appear to have their own strengths and weaknesses” (Lee and Anderson, 2013, p. 446).

It has been argued that providing students with instructions for solving tasks lowers the cognitive demand and frees up resources that students can use to develop a conceptual understanding (e.g., the worked example design; Sweller et al., 2011). In contrast, other approaches argue that students should not be given instructions for solving tasks; one example is Kapur’s (2008, 2010) suggestion of an “ill-structured” task design. With respect to the latter approach, Hiebert and Grouws (2007) and Niss (2007) emphasize that providing students with little or no instruction generates a struggle (in a positive sense) with important mathematics, which in turn facilitates learning. According to Hiebert (2003) and Lithner (2008, 2017), one of the most challenging aspects of mathematical education is that the teaching models used in schools are commonly based on mechanical repetition, following step-by-step methods, and using predefined algorithms—methods that are commonly viewed as rote learning. Rote learning (i.e., learning facts and procedures) can be positive, as it can reduce the load on the working memory and free up cognitive resources, which can be used for more cognitively demanding activities (Wirebring et al., 2015). A typical example of rote learning is knowledge of the multiplication table, which involves the ability to immediately retrieve “7 × 9 = 63” from long-term memory; this is much less cognitively demanding than calculating 7 + 7 + 7 + 7 + 7 + 7 + 7 + 7 + 7. However, if teaching and/or learning strategies are solely based on rote learning, students will be prevented from developing their ability to struggle with important mathematics, forming an interest in such struggles, gaining conceptual understanding, and finding their own solution methods.

Indeed, several studies have shown that students are mainly given tasks that promote the use of predetermined algorithms, procedures, and/or examples of how to solve the task rather than opportunities to engage in a problem-solving struggle without instruction ( Stacey and Vincent, 2009 ; Denisse et al., 2012 ; Boesen et al., 2014 ; Jäder et al., 2019 ). For example, Jäder et al. (2019) examined mathematics textbooks from 12 countries and found that 79% of the textbook tasks could be solved by merely following provided procedures, 13% could be solved by minor adjustments of the procedure, and only 9% required students to create (parts of) their own methods (for similar findings, also see Pointon and Sangwin, 2003 ; Bergqvist, 2007 ; Mac an Bhaird et al., 2017 ). In response to these findings, Lithner (2008 , 2017) developed a framework arguing that the use of instructions in terms of predefined algorithms has negative long-term consequences for the development of students’ conceptual understanding. To develop their conceptual understanding, students must instead engage in creating (parts of) the methods by themselves. This framework, which addresses algorithmic and creative reasoning, guides the present study.

Research Framework: Algorithmic and Creative Mathematical Reasoning

In the Lithner (2008) framework, task design, students’ reasoning, and students’ learning opportunities are related. When students solve tasks using provided methods/algorithms, their reasoning is likely to become imitative (i.e., using the provided method/algorithm without any reflection). Lithner (2008) defines this kind of reasoning as Algorithmic Reasoning (AR), and argues that AR is likely to lead to rote learning. In contrast, when students solve tasks without a provided method or algorithm, they are “forced” to struggle, and their reasoning needs to be—and will become—more creative. Lithner denotes this way of reasoning as Creative Mathematical Reasoning (CMR) and suggests that CMR is beneficial for the development of conceptual understanding. It is important to note that creativity in this context is neither “genius” nor “exceptional novelty;” rather, creativity is defined as “the creation of mathematical task solutions that are original to the individual who creates them, though the solutions can be modest” ( Jonsson et al., 2014 , p. 22; see also Silver, 1997 ; Lithner, 2008 ; for similar reasoning). Lithner (2008) argues that the reasoning inherent in CMR must fulfill three criteria: (i) creativity , as the learner creates a previously unexperienced reasoning sequence or recreates a forgotten one; (ii) plausibility , as there are predictive arguments supporting strategy choice and verification arguments explaining why the strategy implementation and conclusions are true or plausible; and (iii) anchoring , as the learner’s arguments are anchored in the intrinsic mathematical properties of the reasoning components.

Previous studies have shown that students practicing with CMR outperform students practicing with AR on test tasks (Jonsson et al., 2014; Jonsson et al., 2016; Norqvist, 2017; Norqvist et al., 2019). Jonsson et al. (2016) investigated whether effortful struggle or overlapping processes based on task similarity (denoted transfer appropriate processing, or TAP; Franks et al., 2000) underlie the effects of using CMR and AR. The results did reveal effects of TAP for both CMR and AR tasks, with an average effect size (Cohen’s d; Cohen, 1992) of d = 0.27, while for effortful struggle, which characterizes CMR, the average effect size was d = 1.34. It was concluded that effortful struggle is a more likely explanation for the positive effects of using CMR than TAP.
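For reference, Cohen’s d here is the standardized mean difference; its standard definition (not spelled out in the text) is

$$d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},$$

so d = 1.34 means the group means differed by roughly one and a third pooled standard deviations.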

In sum, the use of instructions in terms of predefined algorithms (AR) is argued to have negative long-term consequences for students’ development of conceptual understanding and to undermine students’ interest in struggling with important mathematics (e.g., Jäder et al., 2019). In contrast, the CMR approach requires students to engage in an effortful and productive struggle (e.g., Lithner, 2017). However, since the students who participated in previous studies were only given practiced test tasks (albeit with different numbers), the results may “merely” reflect memory consolidation without a corresponding conceptual understanding. If, after practice, students can apply their acquired reasoning to tasks not previously practiced, this would indicate a conceptual understanding.

In the present study, we investigate the effects of using AR and CMR tasks during practice on subsequent test tasks, including both practiced test tasks and transfer test tasks. We are familiar with the large amount of transfer research in the literature and are aware that a distinction has been made between near transfer and far transfer tasks (e.g., Barnett and Ceci, 2002; Butler et al., 2017). In the present study, no attempt to distinguish between transfer and near transfer is made; we define transfer tasks as tasks that require a new reasoning sequence in order to be solved (see Mac an Bhaird et al., 2017 for a similar argument). These tasks are further described in the Methods section in conjunction with examples of tasks.

Mathematics and Individual Differences in Cognition

Domain-general abilities, such as general intelligence, influence learning across many academic domains, with mathematics being no exception ( Carroll, 1993 ). General intelligence, which is commonly denoted as the ability to think logically and systematically, was explored in a prospective study of 70,000 students. Overall, it was found that general intelligence could explain 58.6% of the variation in performance on national tests at 16 years of age ( Deary et al., 2007 ). Others have found slightly lower correlations. In a survey by Mackintosh and Mackintosh (2011) , the correlations between intelligence quotient (IQ) scores and school grades were between 0.4 and 0.7. Fluid intelligence is both part of and closely related to general intelligence ( Primi et al., 2010 ), and is recognized as a causal factor in an individual’s response when encountering new situations ( Watkins et al., 2007 ; Valentin Kvist and Gustafsson, 2008 ) and solving mathematical tasks ( Floyd et al., 2003 ; Taub et al., 2008 ). Moreover, there is a high degree of similarity between the mathematics problems used in schools and those commonly administered during intelligence tests that measure fluid cognitive skills ( Blair et al., 2005 ).

Solving arithmetic tasks places demands on our working memory because of the multiple steps that often characterize math. When doing math, we use our working memory to retrieve the information needed to solve the math task, keep relevant information about the problem salient, and inhibit irrelevant information. Baddeley’s (2000, 2010) multicomponent working memory model is a common model used to describe working memory. This model consists of the phonological loop and the visuospatial sketchpad, which handle phonological and visuospatial information, respectively. These two sub-systems are controlled by the central executive and its executive components: updating, shifting, and inhibition (Miyake et al., 2000). To this model, Baddeley (2000) added the episodic buffer, which is held to be responsible for the temporary storage of information from the two sub-systems and the long-term memory. Individual differences in the performance of complex working memory tasks, which are commonly defined as measures of working memory capacity (WMC), arise from differences in an individual’s cognitive ability to actively store, actively process, and selectively consider the information required to produce an output in a setting with potentially interfering distractions (Shah and Miyake, 1996; Wiklund-Hörnqvist et al., 2016).

There is a wealth of evidence and a general consensus in the field that working memory directly influences math performance ( Passolunghi et al., 2008 ; De Smedt et al., 2009 ; Raghubar et al., 2010 ; Passolunghi and Costa, 2019 ). In addition, many studies have shown that children with low WMC have more difficulty doing math ( Adam and Hitch, 1997 ; McLean and Hitch, 1999 ; Andersson and Lyxell, 2007 ; Szücs et al., 2014 ). Moreover, children with low WMC are overrepresented among students with various other problems, including problems with reading and writing ( Adam and Hitch, 1997 ; Gathercole et al., 2003 ; Alloway, 2009 ). Raghubar et al. (2010) concluded that “Research on working memory and math across experimental, disability, and cross-sectional and longitudinal developmental studies reveal that working memory is indeed related to mathematical performance in adults and in typically developing children and in children with difficulties in math” (p. 119; for similar reasoning, also see Geary et al., 2017 ).

Math Tracks

A math track is a specific series of courses students follow in their mathematics studies; examples include a basic or low-level math track versus an advanced math track. In Sweden, there are five levels of math, each of which is subdivided into parts a to c, ranging from basic (a) to advanced (c). That is, course 1c is more advanced than course 1b, and course 1b is more advanced than course 1a. In comparison with social science students, natural science students study math at a higher level and move through the curriculum at a faster pace. At the end of year one, natural science students have gone through courses 1c and 2c, while social science students have gone through course 1b. Moreover, natural science students starting upper secondary school typically have higher grades from lower secondary school than social science students. Therefore, in the present study, it is reasonable to assume that natural science students as a group have better, more advanced mathematical pre-knowledge than social science students.

In the present study, we acknowledge the importance of both fluid intelligence and working memory and thus include a complex working memory task and a general fluid intelligence task as measures of cognitive proficiency. Furthermore, based on their curriculum, the students in this study were divided according to their mathematical tracks (basic and advanced), with the aim of capturing differences in mathematical skills.

This study’s hypotheses were guided by previous theoretical arguments ( Lithner, 2008 , 2017 ) and empirical findings ( Jonsson et al., 2014 , 2016 ; Norqvist et al., 2019 ). On this basis, we hypothesized that:

  • 1. Practicing with CMR tasks would facilitate performance on practiced test tasks to a greater extent than practicing with AR tasks.
  • 2. Practicing with CMR tasks would facilitate performance on transfer test tasks to a greater extent than practicing with AR tasks.
  • 3. Students who are more cognitively proficient would outperform those who are less cognitively proficient on both practiced test tasks and transfer test tasks.
  • 4. Students enrolled in advanced math tracks would outperform those enrolled in basic math tracks on both practiced test tasks and transfer test tasks.

Rationales for the Experiments

The three separate experiments presented below were conducted over a period of 2 years and encompassed 270 students. The overall aim was to contrast CMR with AR with respect to mathematical understanding. An additional aim was to contrast more cognitively proficient students with less cognitively proficient students and investigate potential interactions. The experiments progressed as a function of the findings obtained in each experiment and were, as such, not fully planned in advance. Experiment 1 was designed to replicate a previous study on practiced test tasks (Jonsson et al., 2014), and also introduced transfer test tasks with the aim of better capturing conceptual understanding. However, when running a between-subject design, as in experiment 1, there is a risk of non-equivalent group bias compared with using a within-subject design. It was also hypothesized that the findings (CMR > AR) could be challenged if the students were provided with an easier response mode. It was therefore decided that experiment 2 should employ a within-subject design and use multiple-choice (MC) questions as the test format. After experiment 2, it was discussed whether the eight transfer test tasks used in experiment 2 were too few to build appropriate statistics and whether the MC test format failed to fully capture students’ conceptual understanding because of the possibility of students using response elimination and/or guessing. Moreover, the total number of test tasks was 32 (24 practiced test tasks and eight transfer test tasks), and some students complained that there were too many test tasks, which may have affected their performance. It was therefore decided that experiment 3 should focus solely on transfer test tasks, thereby decreasing the total number of test tasks but increasing the number of transfer test tasks without introducing fatigue. In experiment 3, we returned to short answers as a test format, thus restricting the possibility of students using response elimination and/or guessing.

Materials and Methods

Practice tasks.

A set of 35 tasks was pilot tested by 50 upper secondary school students. The aim was to establish a set of novel and challenging tasks that were not so complex that the students would have difficulty understanding what was requested. Twenty-eight of the 35 tasks fulfilled the criteria and were selected for the interventions. Each of the 28 tasks was then written both as an AR task and as a CMR task (Figures 1A,B). The AR tasks were designed to resemble everyday mathematical textbook tasks. Hence, each AR task provided the student with a method (a formula) for solving the task, an example of how to apply the formula, and a numerical test question (Figure 1A). The CMR tasks did not include any formulas, examples, or explanations, and the students were only asked to solve the numerical test questions (Figure 1B). Each of the 2 × 28 task sets (AR and CMR) included 10 subtasks, which differed only with respect to the numerical value used for the calculation. Although the number of task sets differed between the three experiments, there were 10 subtasks in each task set in all three experiments. Moreover, in each CMR task set, the third subtask asked students to construct a formula (Figure 1C). If the students completed all 10 subtasks, the software randomly resampled new numerical tasks until the session ended. This resampling ensured that the CMR and AR practice conditions lasted for the same length of time in all three experiments.

Figure 1. (A–C) Examples of AR and CMR practice tasks and how they were presented to the students on their laptop screen: (A) AR practice task; (B) CMR practice task; (C) CMR task asking for the formula.
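As a rough illustration of the practice-session flow just described (10 fixed subtasks per task set, then random resampling of new numerical variants until the session ends), a minimal sketch follows; the function names, timings, and task representation are all assumptions, not the authors’ software:

```python
import random
import time

def present(task):
    """Stand-in for showing one subtask and collecting an answer."""
    print("presenting:", task)
    time.sleep(0.1)  # placeholder for the student's solving time

def run_task_set(subtasks, session_seconds, make_variant):
    deadline = time.monotonic() + session_seconds
    # The 10 designed subtasks, which differ only in their numerical values.
    for task in subtasks:
        if time.monotonic() >= deadline:
            return
        present(task)
    # After all 10, resample new numerical variants until time runs out,
    # keeping the AR and CMR practice conditions the same length.
    while time.monotonic() < deadline:
        present(make_variant(random.choice(subtasks)))

run_task_set([f"subtask {i}" for i in range(1, 11)], 2.0,
             lambda t: f"{t} (new numbers: {random.randint(2, 99)})")
```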

Test tasks that were the same as the practice tasks (albeit with different numbers) are denoted as “practiced test tasks,” while tasks that differed from the practice tasks are denoted as “transfer test tasks.”

Practiced Test Tasks

The layout of the practiced test tasks, consisting of numerical and formula tasks, can be seen in Figures 2A,C. The similarities between practice tasks and practiced test tasks may promote overlapping processing activities (Franks et al., 2000) or, according to the encoding specificity principle, provide contextual cues during practice that can aid later test performance (Tulving and Thomson, 1973). Transfer test tasks were therefore developed.

Figure 2. (A–D) Examples of test tasks and how they were presented to the students on their laptop screen: (A,C) practiced test tasks; (B,D) transfer test tasks.

Transfer Test Tasks

The layout of the transfer test tasks, consisting of numerical and formula tasks, can be seen in Figures 2B,D. The rationale for why transfer test tasks constitute a more valid measure of students’ conceptual understanding is that the solution algorithm for a practiced task (e.g., y = 3x + 1) could have been memorized without any conceptual understanding. For a transfer test task, the same algorithm cannot be used again, but the same general solution idea (e.g., multiplying the number of squares or rectangles by the number of matches needed for each new square/rectangle, and then adding the number of matches needed to complete the first square/rectangle) can be employed. We argue that knowing this general solution idea constitutes a local conceptual understanding of the task.
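To make the general solution idea concrete: in the matchstick pattern the practiced formula describes, each new square in a row costs 3 matches and completing the first square costs 1 more, so for $x$ squares

$$y = 3x + 1.$$

A transfer variant keeps the idea but changes the shape; for instance, if each new shape cost 5 matches, the same reasoning would give $y = 5x + 1$. (The numbers in this second example are our illustration, not necessarily those of the tasks actually used.)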

The Supplementary Material provides more examples of tasks.

Practice and Test Settings

In all three experiments, the practice sessions and test sessions were conducted in the students’ classroom. Both sets of tasks were presented to the students on their laptops. All tasks were solved individually; hence, no teacher or peer support was provided. The students were offered the use of a simple virtual calculator, which was displayed on their laptop screen. After submitting each answer during a practice session, the correct answer was shown to the students. However, no correct answers were provided to tasks that asked the students to construct formulas (i.e., the third CMR task). This was done to prevent students from using a provided formula instead of constructing a method/formula.

The software that was used for presenting practice and test tasks also checked and saved the answers automatically. All students received the same elements of the intervention, which, due to the computer presentation, were delivered in the same manner to all students, ensuring high fidelity (Horner et al., 2006). The Supplementary Material provides additional examples and descriptions of the tasks employed in this study. The three experiments did not include a pre-test due to the risk of an interaction between the pre-test and the learning conditions, making the students more or less responsive to the manipulation (for a discussion, see Pasnak, 2018). Moreover, the students were unfamiliar with the mathematical tasks.

Cognitive Measurement

The cognitive measures included cognitive testing of a complex working memory task (operation span; Unsworth et al., 2005 ) and general fluid intelligence (Raven’s Advanced Progressive Matrices; Raven et al., 2003 ). Raven’s APM consists of 48 items, including 12 practice items. To capture individual differences and to prevent both ceiling and floor effects, we used the 12 practice items as well as the 36 original test items. The 12 practice items were validated against Raven’s Standard Progressive Matrices ( Chiesi et al., 2012 ). These 48 test items were divided into 24 odd-numbered and 24 even-numbered items. Half of the students were randomly assigned to the odd-numbered items and half were assigned the even-numbered items. The total number of correct solutions was summed, providing a maximum score of 24. The task was self-paced over a maximum of 25 min. The countdown from 25 min was displayed in the upper-right corner of the screen. Initially, the students practiced on three items derived from Raven’s Standard Progressive Matrices. A measure of internal consistency (Cronbach’s alpha) was extracted from a larger pool of data, which encompassed the data obtained from the students in experiments 1 and 2, and was found to be 0.84.

In the operation span task, students were asked to perform mathematical operations while retaining specific letters in their memory. After a sequence of mathematical operations and letters, they were asked to recall the letters in the same order as they were presented. The mathematical operations were self-paced (with an upper limit of 2.5 standard deviations above each individual’s average response time, extracted from an initial practice session). Each letter was presented after each mathematical operation and displayed for 800 ms. Set sizes ranged from three to seven letters, with three sets of each size. The sum of all entirely recalled sets was used as the student’s WMC score. The measure of internal consistency revealed a Cronbach’s alpha of 0.83. The task as a whole was self-paced, with no overall time limit.
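A minimal sketch of the scoring rule described above, under one reading of “the sum of all entirely recalled sets” (that each fully recalled set contributes its size; this interpretation is our assumption):

```python
def ospan_score(trials):
    """trials: list of (presented_letters, recalled_letters) pairs."""
    # A set counts only if recalled entirely and in the presented order.
    return sum(len(presented) for presented, recalled in trials
               if presented == recalled)

# Three illustrative trials: the second set is recalled in the wrong order.
print(ospan_score([("FJK", "FJK"), ("BQLT", "BQTL"), ("MRXHS", "MRXHS")]))  # 8
```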

The operation span task and Raven’s matrices were combined into a composite score denoted as the cognitive proficiency (CP) index. The CP index score was based on a z -transformation of the operation span task performance and Raven’s matrices, thus forming the CP composite scores. These CP composite scores were then used to split (median split) students into lower and higher CP groups, and were used as a factor in the subsequent analyses across all three experiments. The students conducted the cognitive tests in their classrooms approximately 1 week before each of the three experiments.
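A minimal sketch of the CP-index construction described above (z-transform each measure, combine, then median-split); the column names, scores, and the choice to average the two z-scores are illustrative assumptions, not the study’s data or code:

```python
import numpy as np
import pandas as pd

# Illustrative scores for six students (not the study's data).
df = pd.DataFrame({
    "ospan": [31, 45, 12, 50, 28, 39],   # operation span
    "raven": [12, 18, 7, 20, 11, 15],    # Raven's matrices
})

z = (df - df.mean()) / df.std(ddof=1)    # z-transformation of each measure
df["cp"] = z.mean(axis=1)                # CP composite score
df["cp_group"] = np.where(df["cp"] >= df["cp"].median(), "high", "low")
print(df)
```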

Experiment 1

Participants.

An a priori power analysis using the effect size (d = 0.73) from Jonsson et al. (2014) indicated that, with an alpha of 0.05 and a statistical power of 0.80, a sample size of 61 students would be required to detect a group difference. The students attended a large upper secondary school located in a municipality in a northern region of Sweden. Recruitment of students was conducted in class by the authors. One hundred and forty-four students were included in the experiment. Within each math track (basic, advanced), students were randomly assigned to either the AR or the CMR group. Of those, 137 students (63 boys, 74 girls) with a mean age of 17.13 years (SD = 0.62) were included and subsequently analyzed according to their natural science (advanced level) or social science (basic level) math track and their CP. All students spoke Swedish. Written informed consent was obtained from the students in accordance with the Helsinki declaration. The Regional Ethics Committee at Umeå University, Sweden, approved the study.
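For context, the reported sample size can be reproduced with a standard power calculation for an independent-samples t-test; the tool below is our choice, as the paper does not name its software:

```python
from statsmodels.stats.power import TTestIndPower

# d = 0.73, alpha = 0.05, power = 0.80, two-sided, equal group sizes.
n_per_group = TTestIndPower().solve_power(effect_size=0.73, alpha=0.05, power=0.80)
print(n_per_group)  # ~30.5 per group, i.e., ~61 students in total
```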

Cognitive Measures

The cognitive testing included measures of working memory (operation span; Unsworth et al., 2005) and general fluid intelligence (Raven’s matrices; Raven et al., 2003). The mean values were 31.52 (SD = 16.35) for the operation span task and 12.63 (SD = 5.10) for Raven’s matrices. The correlation between the operation span and Raven’s matrices was significant, r = 0.42, p < 0.001. A CP composite score was formed based on the operation span and Raven’s matrices scores, and was used to split the students into low and high CP groups; it was also used as a factor in the subsequent analyses.

From the 28 designed tasks (see above), 14 practice tasks were randomly chosen for the practice session. The corresponding 14 practiced test tasks together with seven transfer test tasks were used during the test.

In a between-group design, the students engaged in either the AR practice ( N = 72), which involved solving 14 AR task sets ( Figure 1A ), or the CMR ( N = 65) practice, which involved solving 14 CMR task sets ( Figure 1B ). The students had 4 min to conclude each of the 14 task sets.

One week later, a test was conducted in which students were asked to solve 14 practiced test tasks (formula and numerical tasks; Figures 2A,C) and seven transfer test tasks (formula and numerical tasks; Figures 2B,D). The first test task for both the practiced test tasks and the transfer test tasks was to write down the formula corresponding to the practice task, with a time limit of 30 s. The second test task for both the practiced test tasks and the transfer test tasks comprised solving a numerical test task. The students were given 4 min to solve each task. The practiced test tasks were always presented before the transfer test tasks.

Statistical Analysis

A 2 (CP: low, high) × 2 (group: AR, CMR) × 2 (math track: basic, advanced) multivariate analysis of variance (MANOVA) was followed by univariate analyses of variance (ANOVAs). The proportions of correct responses on numerical (practiced, transfer) and formula (practiced, transfer) tasks were entered as the dependent variables. Cohen’s d and partial eta squared (ηp²) were used as indices of effect size.
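A minimal sketch of the 2 × 2 × 2 MANOVA described above, using statsmodels and synthetic data (the variable names and data are assumptions, not the authors’ analysis code):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 137
df = pd.DataFrame({
    "cp":    rng.choice(["low", "high"], n),
    "group": rng.choice(["AR", "CMR"], n),
    "track": rng.choice(["basic", "advanced"], n),
    # Proportions correct on the four dependent variables.
    "prac_formula": rng.uniform(size=n),
    "prac_numeric": rng.uniform(size=n),
    "tran_formula": rng.uniform(size=n),
    "tran_numeric": rng.uniform(size=n),
})

manova = MANOVA.from_formula(
    "prac_formula + prac_numeric + tran_formula + tran_numeric"
    " ~ C(cp) * C(group) * C(track)", data=df)
print(manova.mv_test())  # Wilks' lambda, etc., for each effect
```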

Table 1A displays mean values, standard deviations, skewness, kurtosis, and Cronbach’s alpha for the proportion of correct responses on the test tasks under both the AR and CMR learning conditions. Separate independent t-tests revealed no significant differences between students in the AR and CMR learning conditions for operation span, t(135) = 0.48, p = 0.63, d = 0.08, or for Raven’s matrices, t(135) = 0.12, p = 0.90, d = 0.02, showing that the groups were equal with respect to both complex working memory and fluid intelligence. Moreover, a subsequent analysis (independent t-test) of the CP composite score dividing the students into high and low CP groups showed that the groups could be considered cognitively separated, t(135) = 15.71, p < 0.001, d = 2.68.

Table 1A. Mean proportion of correct responses (M), standard deviations (SD), skewness, kurtosis, and Cronbach’s alpha for the AR and CMR learning conditions, respectively.

Table 1B displays the proportion of correct responses on the test tasks divided according to CP level. The statistical analyses confirmed that the students in the CMR learning condition outperformed those in the AR learning condition, F(4,126) = 4.42, p = 0.002, Wilk’s Λ = 0.40, ηp² = 0.12. Follow-up ANOVAs for each dependent variable were significant: practiced test task formula, F(1,129) = 15.83, p < 0.001, ηp² = 0.10; practiced test task numerical, F(1,129) = 12.35, p = 0.001, ηp² = 0.09; transfer test task formula, F(1,129) = 8.83, p = 0.04, ηp² = 0.06; and transfer test task numerical, F(1,129) = 5.05, p = 0.03, ηp² = 0.04. An effect of CP was also obtained, F(4,126) = 7.71, p < 0.001, Wilk’s Λ = 0.80, ηp² = 0.20, showing that the more cognitively proficient students outperformed those who were less proficient. Follow-up ANOVAs for each dependent variable revealed significant univariate effects of CP for the practiced test task formula, F(1,129) = 12.35, p < 0.001, ηp² = 0.09; the practiced test task numerical, F(1,129) = 25.72, p < 0.001, ηp² = 0.17; the transfer test task formula, F(1,129) = 22.63, p < 0.001, ηp² = 0.15; and the transfer test task numerical, F(1,129) = 22.46, p < 0.01, ηp² = 0.15. However, no multivariate main effect of math tracks and no multivariate interactions were obtained, all p’s > 0.10.

Table 1B. Mean proportion of correct responses (M) and standard deviations (SD) for the AR and CMR learning conditions across low and high CP groups.

With respect to all four dependent variables, the analyses showed that students practicing with CMR performed better on the test 1 week later than students practicing with AR (confirming hypotheses 1 and 2) and that the more cognitively proficient students outperformed their less cognitively proficient counterparts, independent of group (confirming hypothesis 3). Although the natural science students performed, on average, better than the social science students on all four dependent variables, no significant main effect was observed for math tracks (disconfirming hypothesis 4).

Experiment 2

The same hypotheses as in experiment 1 were posed in experiment 2. However, as pointed out above, there is a higher risk of non-equivalent group bias when using a between-subject design, and a simpler test format could challenge the differential effects found in experiment 1 (CMR > AR). It was therefore decided that experiment 2 should employ a within-subject design and use MC questions as a test format instead of short answers.

An a priori power analysis based on a within-subjects pilot study (N = 20) indicated that, with an alpha of 0.05 and a statistical power of 0.80, a sample size of 50 students would be sufficient to detect a group difference. The students came from a larger pool, of which 82 students were randomly allocated to a functional Magnetic Resonance Imaging (fMRI) study; the remaining 51 students participated in experiment 2. An independent t-test revealed no differences concerning age, general fluid intelligence (Raven’s matrices), or WMC (operation span), with p-values > 0.37. The separate fMRI experiment is not reported here. Experiment 2 included 51 students (27 girls, 24 boys) from natural science and social science programs in three upper secondary schools located in a municipality in a northern region of Sweden, with a mean age of 18.13 years (SD = 0.24). Recruitment of students was conducted in class by the authors at each school. The natural science students were enrolled in a more advanced math track than the social science students; as in experiment 1, math track (basic, advanced) was subsequently entered as a factor in the analyses.

As in experiment 1, the cognitive testing included operation span and Raven’s matrices. The mean values and standard deviations were similar to those in experiment 1: M = 38.27 (SD = 19.10) for the operation span task and M = 14.47 (SD = 5.34) for Raven’s matrices. The correlation between operation span and Raven’s matrices was significant, r = 0.52, p < 0.001. A CP composite score was formed based on the operation span and Raven’s matrices scores. The CP score was used to split the students into a low CP group and a high CP group, and was also used as a factor in the subsequent analyses.

In a within-subject design, each student practiced with 12 AR task sets and 12 CMR task sets. The corresponding 24 practiced test tasks, together with eight transfer test tasks, were used as test tasks.

In this within-subject design, the students first practiced with 12 AR task sets. After a break of a few hours, they then practiced with 12 CMR task sets. This order was chosen to avoid carry-over effects from CMR tasks to AR tasks. The rationale was that starting with CMR tasks would reveal the underlying manipulation, which the students could then use to solve the AR tasks; constructing the solution without using the provided formula is the critical factor in the manipulation. To prevent item effects, in which some tasks might be more suitable to be designed as AR or as CMR tasks, the assignment of tasks to the CMR and AR conditions was counterbalanced. The students were given 4 min to conclude each of the 12 task sets.

One week later, the students were asked to solve 24 randomly presented practiced test tasks (albeit with different numbers than before), of which 12 had been practiced as CMR tasks and 12 as AR tasks. These tasks were followed by eight transfer test tasks .

Statistical Analyses

A mixed-design ANOVA was conducted with learning condition (AR, CMR) and task type (practiced, transfer) as the within-subject factors and CP (low, high) and math track (basic, advanced) as the between-subject factors. The proportions of correct responses on practiced test tasks and transfer test tasks were entered as the dependent variables. Cohen’s d and partial eta squared (ηp²) were used as indices of effect size. Although a within-subject design was used, the more cognitively proficient students, who are likely to have better metacognitive ability (see Desoete and De Craene, 2019 for an overview), could potentially make use of constructive matching by comparing a possible solution with the response alternatives, of response elimination by determining which answer is more likely, or of guessing (Arendasy and Sommer, 2013; see also Gonthier and Roulin, 2020). Therefore, the analysis was corrected using the formula FS = R − W/(C − 1), where FS = formula score, R = number of items/questions answered correctly, W = number of items/questions answered incorrectly, and C = number of choices per item/question (e.g., Diamond and Evans, 1973; Stenlund et al., 2014).
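As a worked instance of the correction (the numbers are illustrative, not from the study): with C = 4 response options, a student with R = 20 correct and W = 6 incorrect answers receives

$$FS = R - \frac{W}{C - 1} = 20 - \frac{6}{4 - 1} = 18,$$

which removes the expected gain from blind guessing.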

Table 2A displays the mean values of the proportion of correct responses (not corrected for guessing), standard errors, and the psychometric properties of skewness, kurtosis, and Cronbach’s alpha for the test tasks. An independent t-test of the CP composite score dividing the students into high and low CP groups showed that the students could be considered cognitively separated, t(49) = 12.14, p < 0.001, d = 3.40. The table shows that the mean values for the CMR learning condition are higher than the corresponding values for the AR learning condition for both the practiced test tasks and the transfer test tasks. Table 2B displays the proportion of correct responses (not corrected for guessing) for the test tasks divided according to CP level. The statistical analysis corrected for guessing revealed a significant within-subject effect of learning condition, with the CMR condition being superior to the AR condition, F(1,47) = 9.36, p = 0.004, Wilk’s Λ = 0.83, ηp² = 0.17. However, there was no significant within-subject effect of task type, F(1,47) = 0.77, p = 0.38, Wilk’s Λ = 0.98, ηp² = 0.012. Moreover, there were no significant between-subject effects of CP, F(1,47) = 0.23, p = 0.64, ηp² = 0.004, or of math tracks, F(1,47) = 0.84, p = 0.36, ηp² = 0.005, and there were no interaction effects (p’s > 0.67).

The non-significant effect of CP was rather surprising; therefore, it was decided to re-run the analyses without the correction formula. The analyses again revealed a significant within-subject effect of learning condition, with the CMR condition being superior to the AR condition, F(1,47) = 7.80, p = 0.008, Wilk’s Λ = 0.85, ηp² = 0.14. Again, no significant within-subject effect of task type, F(1,47) = 2.3, p = 0.13, Wilk’s Λ = 0.95, ηp² = 0.02, or between-subject effect of math tracks, F(1,47) = 3.45, p = 0.07, ηp² = 0.07, was detected. However, the between-subject effect of CP was now clearly significant, F(1,47) = 18.74, p < 0.001, ηp² = 0.28. Moreover, a learning condition × CP interaction, F(1,47) = 9.05, p = 0.004, Wilk’s Λ = 0.83, ηp² = 0.16, was qualified by a learning condition × task type × CP interaction, F(1,47) = 8.10, p = 0.005, Wilk’s Λ = 0.84, ηp² = 0.16. The three-way interaction was driven by students with high CP performing better in the CMR learning condition than in the AR learning condition, an effect especially pronounced for the transfer test tasks. No other interaction effects were detected (p’s > 0.70).

With respect to both practiced test tasks and transfer test tasks, the analyses showed, as expected, that students who practiced with CMR had superior results on the tests 1 week later compared with students who practiced with AR (confirming hypotheses 1 and 2). In comparison with experiment 1, experiment 2 showed notably higher performance levels, which most likely reflected the MC test format. Viewed in relation to previous studies of CMR (e.g., Jonsson et al., 2014) and the significant number of studies showing that educational attainment in math is intimately related to cognitive abilities (e.g., Adam and Hitch, 1997; Andersson and Lyxell, 2007), the non-significant effect of CP was unexpected. The finding that task type was non-significant, albeit in the direction of the practiced test tasks being easier than the transfer test tasks, was also somewhat unexpected. It is possible that the eight transfer test tasks (four AR and four CMR) were too few to build reliable statistics. The natural science students (advanced math track) performed better on average than the social science students (basic math track), but this trend did not reach statistical significance (disconfirming hypothesis 4). After the unexpected non-significant effect of CP, the analysis was re-run without the correction formula. This analysis revealed a main effect of CP (confirming hypothesis 3) and a learning condition × CP interaction that was qualified by a learning condition × task type × CP interaction. The three-way interaction indicates that cognitively stronger students could utilize response elimination or successful guessing in the MC test more effectively than their lower-CP counterparts, especially for the transfer test tasks.

This design, in which the CMR practice tasks were presented shortly after the AR tasks, may have introduced a recency effect and thus facilitated test performance more for CMR than for AR tasks. However, the CMR practice session contained 12 different task sets, and each new task set was a potential distractor for the previous task sets. Moreover, between the learning session and the test 1 week later, the students attended their regular classes. These activities, viewed in conjunction with the well-known facts that the recency effect is rather transitory (Koppenaal and Glanzer, 1990) and that recall is severely disrupted even by unrelated in-between cognitive activities (Glanzer and Cunitz, 1966; Kuhn et al., 2018), probably eliminated the risk of recency effects. In experiment 2, the total number of test tasks was 32 (24 practiced test tasks and eight transfer test tasks), and some students complained that there were too many tasks, which may have affected their performance; potentially, the more cognitively proficient students were less affected by the large number of test tasks.

Experiment 3

In experiment 3, the same hypotheses were posed as in experiments 1 and 2. However, as pointed out above, the more cognitively proficient students were potentially less affected by fatigue and gained more from using MC questions as a test format. Therefore, it was decided that experiment 3 should retain the within-subject design but use only transfer test tasks . Moreover, we reintroduced written answers as a response mode to prevent processes of constructive matching and response elimination. To reduce a potential, but unlikely, recency effect, the presentation order for a subsample was reversed, with CMR tasks being presented before AR tasks.

Experiment 3 included 82 students with an average age of 17.35 years (SD = 0.66), of whom 35 were girls and 47 were boys. The participants were from two upper secondary schools located in a municipality in a northern region of Sweden. Recruitment of students was conducted in class by the authors at each school. The students were divided into two math tracks. The first was a mathematical track that included year 3 technical students and year 2 natural science students (advanced math track); these students were regarded by their schoolteachers as approximately equal in math skill background. The second math track consisted of year 1 natural science students and year 2 social science students (basic math track); these students were also regarded as approximately equal in math skill background. The students were subsequently analyzed according to their math tracks.

The cognitive tests were the same as in experiments 1 and 2. The mean values and standard deviations of the operation span ( M = 36.78, SD = 16.07) and Raven’s matrices ( M = 14.33, SD = 4.35) were similar to those from experiments 1 and 2. The correlation between operation span and Raven’s matrices was found to be significant, with r = 0.40 and p < 0.001, and a CP composite score based on operation span and Raven’s matrices scores was again formed, used to split the students into a low CP group and a high CP group, and used as a factor in the subsequent analyses.

The same practice tasks were used as in experiment 2. In a within-subject design, the students practiced with 12 AR task sets and 12 CMR task sets, and 24 transfer test tasks were used during the test.

The students practiced with the same tasks and setup as in experiment 2, with the exception that the order of presentation was reversed for a subset of students, with AR tasks being practiced before CMR tasks. The students had 4 min to conclude each of the 12 task sets during practice. One week later, the students were asked to solve 24 transfer test tasks . The students were given 130 s to solve each test task.

The initial mixed-design ANOVA, with learning condition (AR, CMR) as the within-subject factor, order of presentation as the between-subject factor, and the proportion of correct responses as the dependent variable, investigated the potential presentation order × learning condition interaction. This interaction was non-significant, F(1,80) = 0.22, p = 0.88, Wilk’s Λ = 0.10, ηp² = 0.0004; the presentation order was therefore excluded from further analyses. Considering that the students differed in age (by approximately 1 year), we controlled for age by conducting a mixed-design analysis of covariance (ANCOVA) with learning condition (AR, CMR) as a within-subject factor and CP (low, high) and math track (basic, advanced) as the between-subject factors. The proportion of correct responses on the transfer test tasks was entered as the dependent variable, and age was used as a covariate. Cohen’s d and partial eta squared (ηp²) were used as indices of effect size.

Table 3A displays the mean values, standard deviations, skewness, kurtosis, and Cronbach’s alpha for the test tasks. An independent t-test of the CP composite score used to divide the students into high and low CP groups showed that the students could be considered cognitively separated, t(80) = 12.88, p < 0.001, d = 2.84. The table shows that practicing with the CMR tasks was superior to practicing with the AR tasks. Table 3B displays the proportion of correct responses for the transfer test tasks divided according to CP level. The statistical analysis confirmed a within-subject effect of learning condition, F(1,77) = 20.88, p < 0.001, Wilk’s Λ = 0.78, ηp² = 0.21. The analysis also revealed a between-subject effect of CP, F(1,76) = 21.50, p < 0.001, ηp² = 0.22. However, no between-subject effect of math tracks and no interaction effects were obtained, p’s > 0.15.

The findings from experiment 3 were in line with those from the previous experiments, providing evidence that practicing with CMR tasks was superior to practicing with AR tasks (confirming hypotheses 1 and 2). As expected, the analyses showed that the more cognitively proficient students outperformed those who were less cognitively proficient (confirming hypothesis 3). Once again, no significant effect was obtained for math tracks (disconfirming hypothesis 4).

General Discussion

This study contrasted CMR with AR across three experiments encompassing 270 students. It was hypothesized that practicing with CMR leads to better performance than practicing with AR on practiced test tasks and transfer test tasks (hypotheses 1 and 2). Experiments 1 and 2 included both practiced test tasks and transfer test tasks, while experiment 3 focused exclusively on transfer test tasks. The practiced test tasks were identical to the tasks that the students had practiced (albeit with different numbers). The transfer test tasks were different from the practice tasks, but they shared an underlying solution idea. To solve the transfer test tasks, the students had to rely on relevant knowledge (a solution idea) acquired during their practice, which is critical in mathematics. A student with no solution idea to rely on would have to construct a method for the transfer test tasks from scratch.

Moreover, this study hypothesized that the more cognitively proficient students would outperform those who were less cognitively proficient (hypothesis 3), independent of learning conditions. The upper secondary students were from different student programs with different mathematical backgrounds (i.e., basic and advanced math tracks), which was entered as a factor in the analyses. It was expected that those enrolled in a more advanced math track would outperform those enrolled in a basic math track (hypothesis 4).

Overall, the results confirmed hypotheses 1–3. However, no effects of math tracks were obtained, disconfirming hypothesis 4. Below, these hypotheses are discussed in detail.

Hypotheses 1 and 2

The analysis of the practiced test tasks in experiment 1 followed the setup of Jonsson et al. (2014), in which the dependent variables included recalling specific formulas and solving numerical practiced test tasks. Moreover, experiment 1 went beyond Jonsson et al. (2014) by adding transfer test tasks. The results of experiment 1 were in line with those of Jonsson et al. (2014): practicing with CMR tasks led to significantly better performance on the practiced test tasks than practicing with AR tasks. Experiment 1 also found that practicing with CMR led to significantly better performance on transfer test tasks. In experiment 2, we turned to a within-subject design, with the aim of removing potential non-equivalent group bias, and introduced MC questions as a test format, thereby challenging hypotheses 1 and 2 with an easier test format. Again, significant CMR > AR effects were detected for both practiced test tasks and transfer test tasks. However, given that only four AR and four CMR transfer test tasks were used in experiment 2, the results could be questioned in terms of building adequate statistics. Therefore, using a within-subject design, experiment 3 focused solely on transfer test tasks, which increased the number of transfer test tasks and reduced the total number of tasks and, thus, the risk of fatigue. We also reintroduced written answers as a response mode to prevent response elimination and guessing. The analysis of experiment 3 revealed that practicing with CMR tasks had a more beneficial effect than practicing with AR tasks on the transfer test tasks, again confirming hypothesis 2.

Hypothesis 3

When a short answer format was used, as in experiments 1 and 3, the effects of CP were clear, confirming previous studies and hypothesis 3. The second analysis in experiment 2 also confirmed hypothesis 3. That analysis showed that all participants improved their performance; the proportion of correct responses was higher in experiment 2 than in experiments 1 and 3 (Tables 1–3), most likely due to the MC response mode. The second analysis also indicates that the more cognitively proficient students could, in addition, use response elimination or successful guessing more effectively (Desoete and De Craene, 2019), thereby outperforming the less cognitively proficient students. However, when the analysis was corrected for guessing (the first analysis), the benefits of using response elimination or guessing were removed, while the effects of the easier MC response mode remained, which evened out the differences between the CP groups and thereby also removed the effect of CP.

Hypothesis 4

The non-significant effect of mathematical track was somewhat surprising and disconfirmed hypothesis 4. A plausible interpretation is that the students enrolled in the more advanced math tracks, which (according to the curriculum) involve more extensive mathematical training, could not draw on their acquired mathematical knowledge when solving the novel experimental test tasks; if this interpretation is correct, it would indicate that the assumption of task novelty was also correct.

Overall, this study provides support for the argument that CMR facilitates learning to a greater degree than AR and confirms the results of previous studies (Jonsson et al., 2014, 2016; Norqvist et al., 2019). Although the effect sizes were rather small, they must be viewed in relation to the short interventions that the students went through. We argue that when students practice with CMR tasks, they are "forced" to pay attention to the intrinsic and relevant mathematical components, which develops their conceptual understanding. The effects on transfer test tasks indicate that practicing with CMR tasks, in comparison with practicing with AR tasks, better facilitates students' ability to transfer their knowledge; that is, they can more readily carry their solution idea from the practice task over to a different task sharing the same underlying solution idea (the transfer test tasks). This argument is in line with the findings of the Norqvist et al. (2019) eye-tracking study: When students practiced with AR tasks, they disregarded critical information that could have been used to build a more in-depth understanding, whereas students who practiced with CMR tasks focused on critical information more frequently.

Practice with CMR is most likely associated with more effortful struggle, an argument that shares similarities with the framework of "ill-structured tasks" (Kapur, 2008, 2010). In the ill-structured task approach, students are given tasks for which no method or procedure is provided and for which multiple solution paths may exist. Students are required to (try to) solve the ill-structured task by constructing their own methods before the teacher provides instruction on the mathematics to be learned (VanLehn et al., 2003; Kapur, 2010). Those studies showed that the struggle of creating methods was especially beneficial for developing a conceptual understanding of the task, as demonstrated by significantly better performance on transfer test tasks (e.g., Kapur, 2010, 2011). It is argued that the task complexity inherent in ill-structured tasks is a key factor that helps students to create structures that facilitate their conceptual understanding of mathematics. Furthermore, studies have shown that the more solutions students generate on their own, the better their test performance becomes, even when their methods do not fully solve the practice task (Kapur, 2014). In the CMR tasks used in the present study, no instructions were given. Similar to the ill-structured approach, such tasks may expose knowledge gaps and enable (or "force") students to search for and perceive in-depth structural problem features (Newman and DeCaro, 2019). Although an excessively high cognitive load may hamper learning, a desirable amount of cognitive load, in terms of struggling (in a positive sense) with mathematics, may be beneficial for developing conceptual understanding (Hiebert and Grouws, 2007). In the present study, such development of students' conceptual understanding was seen in the form of better performance on the later test as a function of practicing with CMR tasks rather than AR tasks.

This study provides support for the theoretical link between the learning process using CMR, performance, and conceptual understanding. The results also underscore that although CP was associated with better performance, it did not interact with the learning condition. Hence, both cognitively stronger and cognitively weaker students benefited from using CMR relative to AR. The theoretical framework (Lithner, 2008, 2017) could potentially be extended with an individual differences perspective with respect to cognitive prerequisites and their implications for the learning process. With respect to the non-significant effect of math track, the assumption of task novelty seems to be correct. Moreover, this non-significant effect also indicates that students can gain conceptual understanding by using CMR even on tasks for which they have little or no pre-knowledge, and among students with "only" a basic mathematical background.

The results of this study can also be discussed from a self-explanation perspective (for an overview, see Rittle-Johnson et al., 2017). According to Rittle-Johnson et al. (2017), the mechanism underlying self-explanation is the integration of new information with previous knowledge. This involves guiding students' attention to the structural features, rather than the surface features, of the to-be-learned material, which can aid comprehension and transfer. In the CMR framework, predictive arguments supporting strategy choice and verification arguments explaining why the strategy implementation and conclusions are "true or plausible" are regarded as the corresponding critical features.

In sum, in the CMR/AR, ill-structured tasks, and self-explanation approaches, the critical aspects are how tasks are designed and how mathematical reasoning is supported. Moreover, to move beyond textbooks' step-by-step solutions and understand the underlying ideas, students need to engage (in a positive sense) in mathematical struggle. Nevertheless, students are unlikely to take on such effort by themselves. The CMR and ill-structured task frameworks withhold ready-made solution methods and require students to find an underlying idea and create solutions on their own. Although CMR task solving is more cognitively demanding during practice than AR task solving, it helps the learner focus on the information relevant for solving the task. Moreover, similar to the self-explanation approach, the CMR approach guides students to the structural features that are critical for aiding comprehension.

Limitations

A limitation of the present study is that experiment 3 did not include any practiced test tasks. However, the results from experiments 1 and 2 indicate that including practiced test tasks in experiment 3 would have yielded the same conclusions. A further potential limitation is that the presentation format in experiment 2 differed from that in experiments 1 and 3. However, it could in fact be argued that this is a strength of the study: Despite the different response formats for the test tasks, the experiments yielded similar results, with CMR consistently outperforming AR. Although the experiments were based on convenience samples, which could narrow the external validity, the students came from four different upper secondary schools, which provided some heterogeneity. The results can also be discussed from the perspective of Hawthorne effects: The students' awareness that they were part of a study may have affected their performance, and, although this is unlikely, the findings may not generalize to a regular setting in which no researcher is present.

Moreover, there were no pre-test measures in any of the experiments, as it was argued that a pre-test could make the students more or less responsive to the manipulation (see Pasnak, 2018, for a discussion). On the other hand, pre-tests could have provided insight into how comprehension increased from pre- to post-test. In experiment 1, a pre-test would also have provided a baseline of student performance, which could have been used to evaluate initial group differences.

Implications and Future Research

The results from the present and previous studies (e.g., Jonsson et al., 2014, 2016; Norqvist et al., 2019) have implications for school settings, as AR tasks (as opposed to CMR tasks) are commonly used in teaching approaches and textbooks (Stacey and Vincent, 2009; Denisse et al., 2012; Shield and Dole, 2013; Boesen et al., 2014; Jäder et al., 2019) but, as argued above, do not promote optimal student learning. We argue for an eclectic perspective in which validated methods that emphasize mathematical struggle, such as task solving using CMR, ill-structured tasks, and self-explanation, are part of the mathematics curriculum, in conjunction with approaches that reduce cognitive load, such as worked examples. In future studies, it would be interesting not only to contrast CMR with other approaches but also to investigate how to combine the CMR approach with, for example, self-explanation (Rittle-Johnson et al., 2017) and, potentially, worked examples (Sweller et al., 2011). Another potential combination could involve retrieval practice, a cognitive learning strategy based on self-testing. At first glance, retrieval practice is very different from using CMR: Using CMR emphasizes the construction of solutions, while retrieval practice strengthens memory consolidation through the act of retrieving information from long-term memory. For example, retrieving the definition of working memory without the support of written text enhances one's ability to remember the definition across long retention intervals (Wiklund-Hörnqvist et al., 2014). The performance difference between retrieval practice and other ways of acquiring information, most commonly re-reading, is denoted the "testing effect." The testing effect is supported by both behavioral and fMRI evidence (for overviews, see Dunlosky et al., 2013; van den Broek et al., 2016; Adesope et al., 2017; Antony et al., 2017; Moreira et al., 2019; Jonsson et al., 2020). Research currently underway indicates that measures of brain activity following the testing effect (retrieval practice > study) and the "CMR effect" (CMR > AR) point to the same activated brain areas. It is possible that adding retrieval practice after formulas or procedures have been established through CMR may enhance the memory strength of specific formulas. Future studies are planned to pursue this reasoning.

Moreover, as stated in the limitations, the experiments in the present study were based on convenience samples. Purely randomized or stratified sampling would be preferable in future studies. It is also unclear whether the CMR approach is effective among students with special needs, although the non-significant effects of math tracks found in the present study are encouraging; future studies should pursue this question.

Data Availability Statement

Ethics Statement

The studies involving human participants were reviewed and approved by the Regional Ethics Committee at Umeå University, Sweden. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements.

Author Contributions

BJ, CG, and JL came up with the idea for the study and jointly contributed to the conceptualization and design of the study and revised the manuscript for important intellectual content. CG and BJ conducted the data collection. BJ performed the statistical analysis and wrote the first draft of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank Tony Qwillbard for data management and computer support, the students who took part in the experiments, and the teachers who allowed access to the schools.

Funding

Funding was received from the Swedish Research Council (VR), Grant 2014–2099.

1 https://www.nctm.org/

2 www.skolverket.se

3 In the CMR group, data from six participants were lost due to an administrative error.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.574366/full#supplementary-material

  • Adams J. W., Hitch G. J. (1997). Working memory and children's mental addition. J. Exp. Child Psychol. 67, 21–38. doi: 10.1006/jecp.1997.2397
  • Adesope O. O., Trevisan D. A., Sundararajan N. (2017). Rethinking the use of tests: a meta-analysis of practice testing. Rev. Educ. Res. 87, 659–701. doi: 10.3102/0034654316689306
  • Alloway T. P. (2009). Working memory, but not IQ, predicts subsequent learning in children with learning difficulties. Eur. J. Psychol. Assess. 25, 92–98. doi: 10.1027/1015-5759.25.2.92
  • Andersson U., Lyxell B. (2007). Working memory deficit in children with mathematical difficulties: a general or specific deficit? J. Exp. Child Psychol. 96, 197–228. doi: 10.1016/j.jecp.2006.10.001
  • Antony J. W., Ferreira C. S., Norman K. A., Wimber M. (2017). Retrieval as a fast route to memory consolidation. Trends Cognit. Sci. 21, 573–576. doi: 10.1016/j.tics.2017.05.001
  • Arendasy M. E., Sommer M. (2013). Reducing response elimination strategies enhances the construct validity of figural matrices. Intelligence 41, 234–243. doi: 10.1016/j.intell.2013.03.006
  • Baddeley A. (2000). The episodic buffer: a new component of working memory? Trends Cognit. Sci. 4, 417–423. doi: 10.1016/s1364-6613(00)01538-2
  • Baddeley A. (2010). Working memory. Curr. Biol. 20, R136–R140. doi: 10.1016/j.cub.2009.12.014
  • Barnett S. M., Ceci S. J. (2002). When and where do we apply what we learn: a taxonomy for far transfer. Psychol. Bull. 128, 612–637. doi: 10.1037/0033-2909.128.4.612
  • Bergqvist E. (2007). Types of reasoning required in university exams in mathematics. J. Math. Behav. 26, 348–370. doi: 10.1016/j.jmathb.2007.11.001
  • Blair C., Gamson D., Thorne S., Baker D. (2005). Rising mean IQ: cognitive demand of mathematics education for young children, population exposure to formal schooling, and the neurobiology of the prefrontal cortex. Intelligence 33, 93–106. doi: 10.1016/j.intell.2004.07.008
  • Boesen J., Helenius O., Bergqvist E., Bergqvist T., Lithner J., Palm T., et al. (2014). Developing mathematical competence: from the intended to the enacted curriculum. J. Math. Behav. 33, 72–87. doi: 10.1016/j.jmathb.2013.10.001
  • Butler A. C., Black-Maier A. C., Raley N. D., Marsh E. J. (2017). Retrieving and applying knowledge to different examples promotes transfer of learning. J. Exp. Psychol. Appl. 23, 433–446. doi: 10.1037/xap0000142
  • Carroll J. B. (1993). Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge: Cambridge University Press.
  • Chiesi F., Ciancaleoni M., Galli S., Primi C. (2012). Using the Advanced Progressive Matrices (Set I) to assess fluid ability in a short time frame: an item response theory-based analysis. Psychol. Assess. 24, 892–900. doi: 10.1037/a0027830
  • Cohen J. (1992). A power primer. Psychol. Bull. 112, 155–159. doi: 10.1037/0033-2909.112.1.155
  • Coles A., Brown L. (2016). Task design for ways of working: making distinctions in teaching and learning mathematics. J. Math. Teacher Educ. 19, 149–168. doi: 10.1007/s10857-015-9337-4
  • De Smedt B., Janssen R., Bouwens K., Verschaffel L., Boets B., Ghesquière P. (2009). Working memory and individual differences in mathematics achievement: a longitudinal study from first grade to second grade. J. Exp. Child Psychol. 103, 186–201. doi: 10.1016/j.jecp.2009.01.004
  • Deary I. J., Strand S., Smith P., Fernandes C. (2007). Intelligence and educational achievement. Intelligence 35, 13–21. doi: 10.1016/j.intell.2006.02.001
  • Denisse R. T., Sharon L. S., Gwendolyn J. J. (2012). Opportunities to learn reasoning and proof in high school mathematics textbooks. J. Res. Math. Educ. 43, 253–295. doi: 10.5951/jresematheduc.43.3.0253
  • Desoete A., De Craene B. (2019). Metacognition and mathematics education: an overview. ZDM 51, 565–575. doi: 10.1007/s11858-019-01060-w
  • Diamond J., Evans W. (1973). The correction for guessing. Rev. Educ. Res. 43, 181–191. doi: 10.3102/00346543043002181
  • Dunlosky J., Rawson K. A., Marsh E. J., Nathan M. J., Willingham D. T. (2013). Improving students' learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychol. Sci. Public Interest 14, 4–58. doi: 10.1177/1529100612453266
  • Floyd R. G., Evans J. J., McGrew K. S. (2003). Relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and mathematics achievement across the school-age years. Psychol. Schools 40, 155–171. doi: 10.1002/pits.10083
  • Franks J. J., Bilbrey C. W., Lien K. G., McNamara T. P. (2000). Transfer-appropriate processing (TAP). Memory Cognit. 28, 1140–1151. doi: 10.3758/bf03211815
  • Gathercole S. E., Brown L., Pickering S. J. (2003). Working memory assessments at school entry as longitudinal predictors of National Curriculum attainment levels. Educat. Child Psychol. 20, 109–122.
  • Geary D. C., Nicholas A., Li Y., Sun J. (2017). Developmental change in the influence of domain-general abilities and domain-specific knowledge on mathematics achievement: an eight-year longitudinal study. J. Educat. Psychol. 109, 680–693. doi: 10.1037/edu0000159
  • Glanzer M., Cunitz A. R. (1966). Two storage mechanisms in free recall. J. Verb. Learning Verb. Behav. 5, 351–360. doi: 10.1016/S0022-5371(66)80044-0
  • Gonthier C., Roulin J.-L. (2020). Intraindividual strategy shifts in Raven's matrices, and their dependence on working memory capacity and need for cognition. J. Exp. Psychol. Gen. 149, 564–579. doi: 10.1037/xge0000660
  • Hiebert J. (2003). "What research says about the NCTM Standards," in A Research Companion to Principles and Standards for School Mathematics, eds Kilpatrick J., Martin G., Schifter D. (Reston, VA: National Council of Teachers of Mathematics), 5–26.
  • Hiebert J., Grouws D. A. (2007). The effects of classroom mathematics teaching on students' learning. Second Handbook Res. Math. Teaching Learning 1, 371–404.
  • Horner S., Rew L., Torres R. (2006). Enhancing intervention fidelity: a means of strengthening study impact. J. Spec. Pediatr. Nurs. 11, 80–89. doi: 10.1111/j.1744-6155.2006.00050.x
  • Jäder J., Lithner J., Sidenvall J. (2019). Mathematical problem solving in textbooks from twelve countries. Int. J. Math. Educ. Sci. Technol. 51, 1–17. doi: 10.1080/0020739X.2019.1656826
  • Jonsson B., Kulaksiz Y. C., Lithner J. (2016). Creative and algorithmic mathematical reasoning: effects of transfer-appropriate processing and effortful struggle. Int. J. Math. Educ. Sci. Technol. 47, 1206–1225. doi: 10.1080/0020739x.2016.1192232
  • Jonsson B., Norqvist M., Liljekvist Y., Lithner J. (2014). Learning mathematics through algorithmic and creative reasoning. J. Math. Behav. 36, 20–32. doi: 10.1016/j.jmathb.2014.08.003
  • Jonsson B., Wiklund-Hörnqvist C., Stenlund T., Andersson M., Nyberg L. (2020). A learning method for all: the testing effect is independent of cognitive ability. J. Educ. Psychol. doi: 10.1037/edu0000627
  • Kapur M. (2008). Productive failure. Cognit. Instruct. 26, 379–424.
  • Kapur M. (2010). Productive failure in mathematical problem solving. Instruct. Sci. 38, 523–550. doi: 10.1007/s11251-009-9093-x
  • Kapur M. (2011). A further study of productive failure in mathematical problem solving: unpacking the design components. Instruct. Sci. 39, 561–579. doi: 10.1007/s11251-010-9144-3
  • Kapur M. (2014). Productive failure in learning math. Cognit. Sci. 38, 1008–1022. doi: 10.1111/cogs.12107
  • Koppenaal L., Glanzer M. (1990). An examination of the continuous distractor task and the "long-term recency effect". Memory Cognit. 18, 183–195. doi: 10.3758/BF03197094
  • Kuhn J. R., Lohnas L. J., Kahana M. J. (2018). A spacing account of negative recency in final free recall. J. Exp. Psychol. Learn. Mem. Cogn. 44, 1180–1185. doi: 10.1037/xlm0000491
  • Lee H. S., Anderson J. R. (2013). Student learning: what has instruction got to do with it? Annu. Rev. Psychol. 64, 445–469. doi: 10.1146/annurev-psych-113011-143833
  • Lithner J. (2008). A research framework for creative and imitative reasoning. Educat. Stud. Math. 67, 255–276. doi: 10.1007/s10649-007-9104-2
  • Lithner J. (2017). Principles for designing mathematical tasks that enhance imitative and creative reasoning. ZDM 49, 937–949. doi: 10.1007/s11858-017-0867-3
  • Mac an Bhaird C., Nolan B. C., O'Shea A., Pfeiffer K. (2017). A study of creative reasoning opportunities in assessments in undergraduate calculus courses. Res. Math. Educ. 19, 147–162. doi: 10.1080/14794802.2017.1318084
  • Mackintosh N., Mackintosh N. J. (2011). IQ and Human Intelligence. Oxford: Oxford University Press.
  • McLean J. F., Hitch G. J. (1999). Working memory impairments in children with specific arithmetic learning difficulties. J. Exp. Child Psychol. 74, 240–260. doi: 10.1006/jecp.1999.2516
  • Miyake A., Friedman N. P., Emerson M. J., Witzki A. H., Howerter A., Wager T. D. (2000). The unity and diversity of executive functions and their contributions to complex "frontal lobe" tasks: a latent variable analysis. Cognit. Psychol. 41, 49–100. doi: 10.1006/cogp.1999.0734
  • Moreira B. F. T., Pinto T. S. S., Starling D. S. V., Jaeger A. (2019). Retrieval practice in classroom settings: a review of applied research. Front. Educ. 4:5. doi: 10.3389/feduc.2019.00005
  • Newman P. M., DeCaro M. S. (2019). Learning by exploring: how much guidance is optimal? Learning Instruct. 62, 49–63. doi: 10.1016/j.learninstruc.2019.05.005
  • Niss M. (2007). "Reactions on the state and trends in research on mathematics teaching and learning," in Second Handbook of Research on Mathematics Teaching and Learning, ed. Lester F. K. (Charlotte, NC: Information Age Publishing).
  • Norqvist M. (2017). The effect of explanations on mathematical reasoning tasks. Int. J. Math. Educ. Sci. Technol. 49, 1–16. doi: 10.1080/0020739X.2017.1340679
  • Norqvist M., Jonsson B., Lithner J., Qwillbard T., Holm L. (2019). Investigating algorithmic and creative reasoning strategies by eye tracking. J. Math. Behav. 55. doi: 10.1016/j.jmathb.2019.03.008
  • Pasnak R. (2018). To pretest or not to pretest. Biomed. J. Sci. Tech. Res. 5. doi: 10.26717/bjstr.2018.05.001185
  • Passolunghi M. C., Costa H. M. (2019). "Working memory and mathematical learning," in International Handbook of Mathematical Learning Difficulties: From the Laboratory to the Classroom, eds Fritz A., Haase V. G., Räsänen P. (Cham: Springer International Publishing).
  • Passolunghi M. C., Mammarella I. C., Altoè G. (2008). Cognitive abilities as precursors of the early acquisition of mathematical skills during first through second grades. Dev. Neuropsychol. 33, 229–250. doi: 10.1080/87565640801982320
  • Pointon A., Sangwin C. J. (2003). An analysis of undergraduate core material in the light of hand-held computer algebra systems. Int. J. Math. Educ. Sci. Technol. 34, 671–686. doi: 10.1080/0020739031000148930
  • Primi R., Ferrão M. E., Almeida L. S. (2010). Fluid intelligence as a predictor of learning: a longitudinal multilevel approach applied to math. Learning Individ. Differ. 20, 446–451. doi: 10.1016/j.lindif.2010.05.001
  • Raghubar K. P., Barnes M. A., Hecht S. A. (2010). Working memory and mathematics: a review of developmental, individual difference, and cognitive approaches. Learning Individ. Differ. 20, 110–122. doi: 10.1016/j.lindif.2009.10.005
  • Raven J., Raven J. C., Court J. H. (2003). Manual for Raven's Progressive Matrices and Vocabulary Scales. San Antonio, TX: Harcourt Assessment.
  • Rittle-Johnson B., Loehr A. M., Durkin K. (2017). Promoting self-explanation to improve mathematics learning: a meta-analysis and instructional design principles. ZDM 49, 599–611. doi: 10.1007/s11858-017-0834-z
  • Shah P., Miyake A. (1996). The separability of working memory resources for spatial thinking and language processing: an individual differences approach. J. Exp. Psychol. Gen. 125, 4–27. doi: 10.1037/0096-3445.125.1.4
  • Shield M., Dole S. (2013). Assessing the potential of mathematics textbooks to promote deep learning. Educat. Stud. Math. 82, 183–199. doi: 10.1007/s10649-012-9415-9
  • Silver E. (1997). Fostering creativity through instruction rich in mathematical problem solving and problem posing. ZDM 29, 75–80. doi: 10.1007/s11858-997-0003-x
  • Stacey K., Vincent J. (2009). Modes of reasoning in explanations in Australian eighth-grade mathematics textbooks. Educat. Stud. Math. 72, 271. doi: 10.1007/s10649-009-9193-1
  • Stenlund T., Sundström A., Jonsson B. (2014). Effects of repeated testing on short- and long-term memory performance across different test formats. Educat. Psychol. 36, 1–18. doi: 10.1080/01443410.2014.953037
  • Sweller J., Ayres P., Kalyuga S. (2011). "Measuring cognitive load," in Cognitive Load Theory (New York, NY: Springer), 71–85. doi: 10.1007/978-1-4419-8126-4_6
  • Szücs D., Devine A., Soltesz F., Nobes A., Gabriel F. (2014). Cognitive components of a mathematical processing network in 9-year-old children. Dev. Sci. 17, 506–524. doi: 10.1111/desc.12144
  • Taub G. E., Floyd R. G., Keith T. Z., McGrew K. S. (2008). Effects of general and broad cognitive abilities on mathematics achievement. School Psychol. Quart. 23, 187–198. doi: 10.1037/1045-3830.23.2.187
  • Tulving E., Thomson D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychol. Rev. 80, 352–373. doi: 10.1037/h0020071
  • Unsworth N., Heitz R., Schrock J., Engle R. (2005). An automated version of the operation span task. Behav. Res. Methods 37, 498–505. doi: 10.3758/BF03192720
  • Valentin Kvist A., Gustafsson J.-E. (2008). The relation between fluid intelligence and the general factor as a function of cultural background: a test of Cattell's investment theory. Intelligence 36, 422–436. doi: 10.1016/j.intell.2007.08.004
  • van den Broek G., Takashima A., Wiklund-Hörnqvist C., Karlsson Wirebring L., Segers E., Verhoeven L., et al. (2016). Neurocognitive mechanisms of the "testing effect": a review. Trends Neurosci. Educ. 5, 52–66. doi: 10.1016/j.tine.2016.05.001
  • VanLehn K., Siler S., Murray C., Yamauchi T., Baggett W. B. (2003). Why do only some events cause learning during human tutoring? Cognit. Instruct. 21, 209–249. doi: 10.1207/S1532690XCI2103_01
  • Watkins M. W., Lei P. W., Canivez G. L. (2007). Psychometric intelligence and achievement: a cross-lagged panel analysis. Intelligence 35, 59–68. doi: 10.1016/j.intell.2006.04.005
  • Wiklund-Hörnqvist C., Jonsson B., Nyberg L. (2014). Strengthening concept learning by repeated testing. Scand. J. Psychol. 55, 10–16. doi: 10.1111/sjop.12093
  • Wiklund-Hörnqvist C., Jonsson B., Korhonen J., Eklöf H., Nyroos M. (2016). Untangling the contribution of the subcomponents of working memory to mathematical proficiency as measured by the national tests: a study among Swedish third graders. Front. Psychol. 7:1062. doi: 10.3389/fpsyg.2016.01062
  • Wirebring L. K., Wiklund-Hörnqvist C., Eriksson J., Andersson M., Jonsson B., Nyberg L. (2015). Lesser neural pattern similarity across repeated tests is associated with better long-term memory retention. J. Neurosci. 35, 9595–9602. doi: 10.1523/jneurosci.3550-14.2015

Going Deeper: Achieving Greater Depth in the Primary Classroom

As a mastery approach becomes increasingly common in many schools, we have been working with teachers on a project to uncover effective ways to support their learners working 'at greater depth'. In this article, we will share our key findings and some of our favourite whole-class teaching resources which offer opportunities for teaching 'at greater depth'. (Several of these are linked from the NCETM's resources which exemplify their 'ready to progress criteria'.)

Teaching at 'greater depth'

One of the challenges facing many teachers is engaging all learners throughout the lesson when adopting a whole-class teaching approach. As guidance issued to schools indicates, "It is inevitable that some pupils will grasp concepts more rapidly than others and will need to be stimulated and challenged to ensure continued progression” (Askew et al., 2015, p.6).

Aware of these concerns, the guidance adopted the phrase ‘mastery at greater depth’ and suggested that pupils working at this level should be able to:

• solve problems of greater complexity (i.e. where the approach is not immediately obvious), demonstrating creativity and imagination;

• independently explore and investigate mathematical contexts and structures, communicate results clearly and systematically explain and generalise the mathematics.

(Askew et al. 2015, p.7)

The requirements for learners working 'at greater depth' to investigate more complex problems, work independently and communicate their ideas to others are the very foundations of our Low Threshold High Ceiling (LTHC) approach at NRICH: a low threshold ensures that every learner can get started on a problem, and the high ceiling ensures that there is a suitable level of challenge, often unseen, for the learner to consider if and when they are ready to do so.

We teamed up with a group of schools based in Tower Hamlets, who were participating in a joint professional development programme with NRICH and the Tower Hamlets Education Partnership, to investigate ways to maximise the potential of our curriculum-mapped LTHC resources for teaching 'at greater depth'. In this article we share some of their top tips.

  • Be prepared to be surprised


During a visit, we joined a very engaged class who quickly got started on the challenge. Working in pairs, with their teacher taking regular opportunities to pause the groups and allow time to share ideas with the wider class, the learners began to organise their dominoes so that they could more easily check which, if any, were missing. Some learners who had completed the initial task were further challenged to consider how many dominoes they might have if the maximum number of dots on each side was increased to seven:

Teacher: We’re doing the seven row. Prove to me that there’s eight in the seven row.

Learner: It will only be like this, seven zero add one, seven one, seven two, seven three, seven four, seven five, seven six, seven seven but you can’t go over that.

Teacher:  So when you get to the seven seven you can’t go any further. So you proved it. There’s no doubles and you can’t go any further than seven seven, can you?

The learning did not stop there; the lesson went much deeper as the learners were encouraged by their teacher to set themselves the challenge of thinking about sets of dominoes which go up to 'double 10', 'double 20' or even 'double 100'. Working with these much larger domino sets encouraged those working 'at greater depth' to generalise their ideas, and their enthusiasm led them to continue working through their break time!
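The learners' row-by-row argument generalises directly: a set that goes up to double-n contains one domino for every unordered pair (a, b) with 0 ≤ a ≤ b ≤ n, which is (n + 1)(n + 2)/2 dominoes in total. Here is a short Python sketch (ours, not part of the original lesson) that cross-checks this closed form against the row-counting the class used:

    # A double-n domino set has one tile per unordered pair (a, b)
    # with 0 <= a <= b <= n, giving (n + 1)(n + 2) / 2 tiles.
    def domino_count(n):
        return (n + 1) * (n + 2) // 2

    # Cross-check by enumerating the rows, as the learners did:
    # the "seven row" of a double-7 set holds 7-0 up to 7-7, eight tiles.
    def domino_count_by_rows(n):
        return sum(1 for a in range(n + 1) for b in range(a, n + 1))

    for n in (6, 7, 10, 20, 100):
        assert domino_count(n) == domino_count_by_rows(n)
        print(n, domino_count(n))   # 28, 36, 66, 231, 5151

So a standard double-6 set has 28 dominoes, and the class's 'double 100' set would have 5151.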

Reflecting on the lesson, the teacher noted that one member of that day's 'at greater depth' group had previously struggled to engage with their mathematics:

What’s great about NRICH activities is that it just shows children can shine even if we’ve got them in a box under 'expected'. That’s what I love about it... You know, he completely steps out of the box and he’s shining.

Clearly, one of the many benefits of working with LTHC resources is their potential for challenging all learners and enabling them to reveal their potential. Be prepared to be surprised.

Encourage a growth mindset

No-one enjoys trying to do too much at once; any activity can become frustrating if it is too far beyond our comfort zone. This is true in our classrooms too. In our mathematics lessons, our curriculum challenges teachers to develop fluency, reasoning and problem-solving skills, and trying to find a balance within a particular lesson can be a challenge (this is an aspect we explore more fully in this article by Clare Lee and Sue Johnston-Wilder). By choosing LTHC activities, learners can easily get started on a task, building their confidence and willingness to engage with further challenges later on.

For example, here is how some learners described their strategy for solving our Coded Hundred Square:

One of the strategies we used to solve the coded 100 chart was to look for patterns. We did this by looking in each column and finding that the symbols end with the same shape. For example, the first column of numbers always end in a rhombus shape. The same thing with the rows. Each row, we noticed while putting the symbols together, begins with the same shape. To solve this problem, we think it's best to work with the shape you first put in the 100 chart, and keep building off of that one. The first piece you put in the 100 chart works best if it fits into a corner of the chart.

In contrast, Nathan adopted a very different approach:

Start by placing any one piece and think which one will fit. Before putting it there, think if any other shape could fit.  Then, if not try another shape that could fit. After the one you choose is put on, repeat until the coded hundred chart is filled.  If you mess up, click on pieces to show the full piece and see if you can change a piece for another piece. Remember the first strategy to check if another one can fit after that one and then repeat again and again until you're done. After I fit the pieces together, I looked down each column. I checked that all the symbols in the ones placed matched. I looked across each row and checked that the symbols in the ones places were in order (1,2,3,4...).

  • Celebrate mathematical thinking

For learners to be working 'at greater depth', they need to be communicating their ideas clearly to others. In our project schools, the teachers often encouraged their learners to reflect on their problem-solving by recording their ideas in a class book. These books were proudly shared with visitors on arrival at their classes, including the NRICH team. Although we cannot visit every classroom, we do enjoy reading solutions to our problems. For example, Jordan, Juliana and Nathan all submitted their ideas about the Coded Hundred Square to the team and their ideas were published on NRICH. Your classes are very welcome to share their ideas about our problem-solving tasks too; simply visit our Live Problems page for our very latest opportunities to communicate mathematically with the team. We publish a selection of the submissions we receive on our website.

Kirsty, for example, sent in this solution to a problem about counting the leaves on a tree:

There are 10 leaves per twig.
There are 10 twigs per branch.
10 leaves x 10 twigs = 100 leaves per branch.
There are 10 branches per trunk.
100 leaves x 10 branches = 1000 leaves per trunk.
There are 10 trunks per tree.
1000 leaves x 10 trunks = 10 000 leaves on the tree.

Cut off one trunk: 10 000 - 1000 = 9000 leaves left.
Cut off one branch: 9000 - 100 = 8900 leaves left.
Cut off one twig: 8900 - 10 = 8890 leaves left.
Pull off one leaf: 8890 - 1 = 8889 leaves left.

There are 8889 leaves left on the tree.

Another learner called Rachel seemed to approach the problem in the same way as Kirsty, but she found the total number of leaves which had been pulled off the tree before working out how many leaves the tree had to begin with.
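Kirsty's working generalises neatly: the tree branches in tens at each of four levels, so each cut removes the next power of ten. A minimal Python sketch of the same calculation (the task structure is inferred from the solution above, and the names are ours):

    # 10 trunks x 10 branches x 10 twigs x 10 leaves = 10**4 leaves.
    BRANCHING = 10
    total_leaves = BRANCHING ** 4              # 10 000

    # One trunk, one branch, one twig and one leaf are removed:
    # 10**3 + 10**2 + 10**1 + 10**0 = 1111 leaves in all.
    removed = sum(BRANCHING ** level for level in range(4))

    print(total_leaves - removed)              # 8889, matching Kirsty

Rachel's route amounts to computing the removed total (1111) first and then subtracting it in a single step.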

Submitting solutions often leads to published solutions, and the teachers in our project schools also reported on the usefulness of the solutions accompanying our resources. Some teachers set aside time on the day after introducing an NRICH problem to enable their learners to compare their solutions with those from other classes:

I really like the solution thing now that I know how to use it and things. Getting the language out of it and stuff and using it after they’ve done it. Maybe tomorrow... we might start by looking at the solution that was there... Everyone will understand it tomorrow and then we can really, you know, look at it and decide whether we like it and pull it apart.

Allow time for you to think mathematically too

Another key finding from our project teachers was the importance of setting aside time to explore each LTHC activity for themselves before the lesson. This approach enabled the teachers to consider the possibilities for extending the learners where needed, to enable teaching 'at greater depth', but it was often seen as a thoroughly enjoyable experience too:

I mean, I consider myself greater depth, okay? ... I do have, you know, quite a high level of maths. So, basically, if I have a greater depth and I’m enjoying it and taking it on and forward progressing, clearly those children can do the same, can’t they?

Our school-based research revealed these four 'tips' for schools to support their learners working 'at greater depth' using NRICH tasks:

  • Be prepared to be surprised
  • Encourage a growth mindset
  • Celebrate mathematical thinking
  • Allow time for you to think mathematically too.

We hope that reading this article will inspire readers to explore the potential of using NRICH tasks to support their own learners working 'at greater depth'.

This project would not have been possible without the generous support of the team at the Tower Hamlets Education Partnership and the teachers who welcomed the NRICH team into their classrooms. We were delighted to share our findings at the British Society for Research into Learning Mathematics (BSRLM) Day Conference in November 2020 - you can access a copy of the accompanying research paper, published in the BSRLM's Proceedings, here.

Askew, M., Bishop, S., Christie, C., Eaton, S., Griffin, P. and Morgan, D. (2015). Teaching for Mastery: Questions, tasks and activities to support assessment. Oxford University Press.

Lee, C. and Johnston-Wilder, S. (2018). Getting into and staying in the Growth Zone. Retrieved from https://nrich.maths.org/13491
