Advisory boards aren’t only for executives. Join the LogRocket Content Advisory Board today →

LogRocket blog logo

  • Product Management
  • Solve User-Reported Issues
  • Find Issues Faster
  • Optimize Conversion and Adoption

Heuristic evaluation: Definition, case study, template

to state that a case study is heuristic means that

Imagine yourself faced with the challenge of assembling a tricky puzzle but not knowing where to start. Elements such as logical reasoning and meticulous attention to detail become essential, requiring an approach that goes beyond the surface level to achieve effectiveness. When evaluating the user experience of an interface, it is no different.

Heuristic Evaluation UX

In this article, we will cover the fundamental concepts of heuristic evaluation, how to properly perform a heuristic evaluation, and the positive effects it can bring to your UX design process. Let’s learn how you can solve challenges with heuristic evaluation.

What is heuristic evaluation?

The heuristic evaluation principles, understanding nielsen’s 10 usability heuristics, essential steps in heuristic evaluation, prioritization criteria in the analysis of usability problems, communicating heuristic evaluation results effectively, dropbox’s heuristic evaluation approach, incorporating heuristic evaluation into the ux process.

The heuristic evaluation method’s main goal is to evaluate the usability quality of an interface based on a set of principles, based on UX best practices. From the identification of the problems, it is possible to provide practical recommendations and consequently improve the user experience.

So where did heuristic evaluation come from, and how do we use these principles? Read on.

Heuristic evaluation was created by Jakob Nielsen, recognized worldwide for his significant contributions to the field of UX. The method created by Nielsen is based on a set of heuristics from human-computer interaction (HCI) and psychology to inspect the usability of user interfaces.

Therefore, Nielsen’s 10 usability heuristics make up the principles of heuristic evaluation by establishing carefully established foundations. These foundations serve as a practical guide to cover the main usability problems of projects. These heuristics work as cognitive shortcuts used by the brain for efficient decision making, especially in redesign projects. Heuristics also help to complement the UX process when understanding user problems and supporting UX research and evaluation.

When you are getting ready to conduct a heuristic evaluation, the first step is to set clear goals. Then, during the evaluation, you should make notes on what you find considering usability issues, always based on the criteria. Once this is done, you can prepare a report that might include information on which issues to tackle first, which makes the evaluation even better. All these steps matter because they help make sure interfaces match what users want and expect, leading to better interactions overall.

Preparation for the heuristic evaluation: Defining usability objectives and criteria

As with the puzzle example in the intro, fully understanding the problem is critical to applying heuristic evaluation effectively. Thus, during the preparation phase, you need to establish the evaluation criteria, also defining how these criteria will be evaluated.

Select evaluators based on their experience. By involving a diverse set of evaluators, you can obtain different perspectives on the same challenge. Although an expert is able to point out most of the problems in a heuristic evaluation, collaboration is essential to generate more comprehensive recommendations.

Although it follows a set of heuristics, the evaluation is less formal and less expensive than a user test, making it faster and easier to conduct. Therefore, heuristic evaluation can be performed in the early stages of design and development when making changes is more cost effective.

Nielsen’s usability heuristics are like a tactical set for methodically making things work, providing valuable clues that designers and creators follow to piece together the usability puzzle. These heuristics act as master guides, helping us intelligently fit each piece of the puzzle together so that everything makes sense and is easy to understand to create amazing experiences in the products and websites we use.

to state that a case study is heuristic means that

Over 200k developers and product managers use LogRocket to create better digital experiences

to state that a case study is heuristic means that

Here are Nielsen’s 10 usability heuristics, each with its own relevance and purpose:

1. System status visibility

Continuously inform the user about what is happening.

Mac Loading Icon

2. Correspondence between the system and the real world

Use words and concepts familiar to the user.

Yahoo Search Bar

3. User control and freedom

Allow users to undo actions and explore the system without fear of making mistakes.

Gmail Undo Trash

4. Consistency and standards

Maintain a consistent design throughout the system, so users can apply what they learned in one part to the rest.

ClickUp Management System

5. Error prevention

Design in a manner that prevents users from committing mistakes or provides means to easily correct wrong decisions.

Confirm Deletion

6. Recognition instead of memorization

Provide contextual hints and tips to help users accomplish tasks without needing to remember specific information.

Siri Listening

7. Flexibility and efficiency of use

Allow users to customize keyboard shortcuts or create custom profiles to streamline their interactions.

Adobe Photoshop Undo

8. Aesthetics and minimalist design

Keep the design clean and simple, focusing on the most relevant information to avoid overwhelming users using proper spacing, colors, and typography.

Airbnb Website

9. Help and documentation

Provide helpful and accessible support in case users need extra guidance.

WhatsApp Help Center

10. User feedback

Give immediate feedback to users when they take an action.

H&M Checkout Confirmation

Together, these pieces of the usability heuristics puzzle help us build a complete picture of digital experiences. Thus, by following these guidelines, evaluators can identify problems and prioritize them for correction at the evaluation stage.

In the evaluation phase, evaluators should look at the product or system interface and document any usability issues based on heuristics. By using heuristics consistently across different parts of the interface, it is still possible to balance conflicting heuristics to find optimal design solutions.

There may be challenges during the evaluation phase, which is why it is important that evaluators suggest strategies to overcome them from the definition of priorities. Evaluators should therefore, in consensus, discuss how these heuristics can be applied to identify and address usability problems.

One of the interesting ways to do heuristic evaluation is through real-time collaboration tools like Miro. On the template below, you will be able to collaborate in real-time to conduct heuristic evaluations of your project with your team, evaluating the problems by criteria and dividing them by colors, based on the level of complexity to be solved.

Heuristic Evaluation Template

You can download the Miro Heuristic Evaluation template for free .

After performing a heuristic assessment, evaluators should analyze the findings and prioritize usability issues, trying to identify the underlying causes of usability issues rather than just addressing surface symptoms.

Usability issues discovered during the assessment can be given severity ratings to prioritize fixes.

Below is an example of categorization by severity according to the challenge presented:

  • High severity : Prevents the user from performing one or more tasks
  • Medium severity : Requires user effort and affects performance
  • Low severity : May be noticeable to the user but does not impede execution or performance

The classification will help the team to have greater clarity regarding what is most relevant to be faced considering the impact on the user experience. By prioritizing the most critical issues based on their impact on the user experience, it will be easier to effectively allocate them throughout the project.

Finally, during the reporting phase, evaluators should present their findings and recommendations to stakeholders and facilitate discussions on identified issues.

Evaluators typically conduct multiple iterations of the assessment to uncover different issues in subsequent rounds based on the need for the project and the issues identified.

Heuristic evaluation provides qualitative data, making it important to interpret the results with a deeper understanding of user behavior. When reporting and communicating the results of a heuristic assessment, assessors should follow best practices by presenting findings in visual representations that are easy to read and understand, and that highlight key findings, whether using interactive boards, tables or other visuals.

Problem descriptions should be clear and concise so they can be actionable. Instead of generating generic problems, for example, break the problems into distinct parts to be easier to deal with. If necessary, try to analyze the interface component and its details, thinking not only analytically in an abstract way but also understanding that that problem will be solved by a UX Designer, considering all its elements. In this scenario, a well-applied context makes all the difference.

It is also important to involve stakeholders and facilitate discussions around identified issues. As a popular saying goes: a problem communicated is a problem half solved.

The Dropbox team really nails it when it comes to giving users a smooth and user friendly experience. Let’s dive into a few ways they have put these heuristic evaluation principles to work in their platform:

Dropbox keeps things clear by using concise labels to show the status of your uploaded files. They also incorporate a convenient progress bar that provides a time estimate for the completion of the upload. This real-time feedback keeps you informed about the ongoing status of your uploads on the platform:

Heuristic Applied

The ease of moving, deleting, and renaming files between different folders and sharing with other people means that Dropbox offers users control over fundamental actions, allowing them to work in a personalized way, increasing their sense of ownership:

Making it a breeze for users to navigate no matter if they’re on a computer or a mobile device, Dropbox keeps things consistent in both: website and mobile app design:

Dropbox Across Mediums

To prevent errors from happening, Dropbox has implemented an interesting feature. If a user attempts to upload a file that’s too large, Dropbox triggers an error message. This message is quite helpful, as it guides the user to select a smaller file and clearly explains the issue. It’s a nifty feature that ensures users know exactly which steps to take next:

Error Prevention

Dropbox cleverly employs affordances to ensure that users can easily figure out how to navigate the app. Take, for instance, the blue button located at the top of the screen — it’s your go-to for creating new files and folders. This is a familiar and intuitive pattern that users can quickly grasp:

Dropbox Navigation

Consider now flexibility and efficiency. On Dropbox, the user can access their files from any gadget, and they can keep working even when they are offline without worrying about losing anything. It makes staying productive a breeze, no matter where the users finds themselves:

Dropbox Access

Dropbox has a clean and minimalist design that’s a breeze to use and get around in. Plus, it’s available in different languages, ensuring accessibility for people all around the world:

Dropbox Design

Dropbox goes the extra mile by using additional methods alongside heuristic evaluation, demonstrating a high positive impact on their services. All this dedication to applying the best heuristics on their products has made Dropbox one of the most popular storage services globally.

Heuristic evaluation fits into the broader UX design process and can be conducted iteratively throughout the design lifecycle, despite being commonly used early in the design process.

It provides valuable insights to inform design decisions and improvements and enables UX designers to effectively identify and address usability issues.

Conclusion and key takeaways

In this article, we have seen that heuristic evaluation is a systematic and valuable approach to identifying usability problems in systems and products. Through the use of general usability guidelines, it is possible to highlight gaps in the user experience, addressing areas such as clarity, consistency and control. This evaluation is conducted by a multidisciplinary team, and the problems identified are recorded in detail, allowing for further prioritization and refinement.

Much like a complex puzzle, improving usability and user experience requires identifying patterns and providing instructive feedback when working collaboratively.

Checking interfaces using heuristic evaluation can uncover many issues, but it’s not a replacement for what you learn from watching actual users. Think of it as an extra tool to understand users better.

Remember that heuristic evaluation not only reveals challenges but also empowers you as a UX professional to create more intuitive and impactful solutions.

When you mix in heuristic evaluation while making your designs, you can end up with products and systems that are more helpful and user-friendly without spending too much. This helps make your products or services even better by following good user experience tips.

So don’t hesitate: make the most of the potential of heuristic evaluation to push usability to the next level in your UX project.

LogRocket : Analytics that give you UX insights without the need for interviews

LogRocket lets you replay users' product experiences to visualize struggle, see issues affecting adoption, and combine qualitative and quantitative data so you can create amazing digital experiences.

See how design choices, interactions, and issues affect your users — get a demo of LogRocket today .

Share this:

  • Click to share on Twitter (Opens in new window)
  • Click to share on Reddit (Opens in new window)
  • Click to share on LinkedIn (Opens in new window)
  • Click to share on Facebook (Opens in new window)
  • #ux research

to state that a case study is heuristic means that

Stop guessing about your digital experience with LogRocket

Recent posts:.

Concept Testing

Using concept testing for better product design

Concept testing is all about validating the product concept in its early phases to avoid investing in ideas that are doomed to fail.

to state that a case study is heuristic means that

Creating design specs for smoother developer handoff

Designers and developers sometimes have hiccups during a project. These guidelines can help avoid roadblocks and improve communication.

to state that a case study is heuristic means that

What is cognitive overload in UX and how can you prevent it?

Explore the psychology behind cognitive overload and learn UX strategies to help reduce it, with a focus on crafting user-centric solutions.

to state that a case study is heuristic means that

Figma Dev Mode: What it can and can’t do for a UX designer

Figma Dev Mode has an easy-to-work-with UI and layout and makes collaboration with other team members far easier.

to state that a case study is heuristic means that

Leave a Reply Cancel reply

  • Bipolar Disorder
  • Therapy Center
  • When To See a Therapist
  • Types of Therapy
  • Best Online Therapy
  • Best Couples Therapy
  • Best Family Therapy
  • Managing Stress
  • Sleep and Dreaming
  • Understanding Emotions
  • Self-Improvement
  • Healthy Relationships
  • Student Resources
  • Personality Types
  • Verywell Mind Insights
  • 2023 Verywell Mind 25
  • Mental Health in the Classroom
  • Editorial Process
  • Meet Our Review Board
  • Crisis Support

What Are Heuristics?

These mental shortcuts can help people make decisions more efficiently

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

to state that a case study is heuristic means that

Steven Gans, MD is board-certified in psychiatry and is an active supervisor, teacher, and mentor at Massachusetts General Hospital.

to state that a case study is heuristic means that

Verywell / Cindy Chung 

  • History and Origins
  • Heuristics vs. Algorithms
  • Heuristics and Bias

How to Make Better Decisions

Heuristics are mental shortcuts that allow people to solve problems and make judgments quickly and efficiently. These rule-of-thumb strategies shorten decision-making time and allow people to function without constantly stopping to think about their next course of action.

However, there are both benefits and drawbacks of heuristics. While heuristics are helpful in many situations, they can also lead to  cognitive biases . Becoming aware of this might help you make better and more accurate decisions.

Press Play for Advice On Making Decisions

Hosted by therapist Amy Morin, LCSW, this episode of The Verywell Mind Podcast shares a simple way to make a tough decision. Click below to listen now.

Follow Now : Apple Podcasts / Spotify / Google Podcasts

The History and Origins of Heuristics

Nobel-prize winning economist and cognitive psychologist Herbert Simon originally introduced the concept of heuristics in psychology in the 1950s. He suggested that while people strive to make rational choices, human judgment is subject to cognitive limitations. Purely rational decisions would involve weighing all the potential costs and possible benefits of every alternative.

But people are limited by the amount of time they have to make a choice as well as the amount of information they have at their disposal. Other factors such as overall intelligence and accuracy of perceptions also influence the decision-making process.

During the 1970s, psychologists Amos Tversky and Daniel Kahneman presented their research on cognitive biases. They proposed that these biases influence how people think and the judgments people make.

As a result of these limitations, we are forced to rely on mental shortcuts to help us make sense of the world. Simon's research demonstrated that humans were limited in their ability to make rational decisions, but it was Tversky and Kahneman's work that introduced the study of heuristics and the specific ways of thinking that people rely on to simplify the decision-making process.

How Heuristics Are Used

Heuristics play important roles in both  problem-solving  and  decision-making , as we often turn to these mental shortcuts when we need a quick solution.

Here are a few different theories from psychologists about why we rely on heuristics.

  • Attribute substitution : People substitute simpler but related questions in place of more complex and difficult questions.
  • Effort reduction : People use heuristics as a type of cognitive laziness to reduce the mental effort required to make choices and decisions.
  • Fast and frugal : People use heuristics because they can be fast and correct in certain contexts. Some theories argue that heuristics are actually more accurate than they are biased.

In order to cope with the tremendous amount of information we encounter and to speed up the decision-making process, our brains rely on these mental strategies to simplify things so we don't have to spend endless amounts of time analyzing every detail.

You probably make hundreds or even thousands of decisions every day. What should you have for breakfast? What should you wear today? Should you drive or take the bus? Fortunately, heuristics allow you to make such decisions with relative ease and without a great deal of agonizing.

There are many heuristics examples in everyday life. When trying to decide if you should drive or ride the bus to work, for instance, you might remember that there is road construction along the bus route. You realize that this might slow the bus and cause you to be late for work. So you leave earlier and drive to work on an alternate route.

Heuristics allow you to think through the possible outcomes quickly and arrive at a solution.

Are Heuristics Good or Bad?

Heuristics aren't inherently good or bad, but there are pros and cons to using them to make decisions. While they can help us figure out a solution to a problem faster, they can also lead to inaccurate judgments about other people or situations.

Types of Heuristics

There are many different kinds of heuristics. While each type plays a role in decision-making, they occur during different contexts. Understanding the types can help you better understand which one you are using and when.

Availability

The availability heuristic  involves making decisions based upon how easy it is to bring something to mind. When you are trying to make a decision, you might quickly remember a number of relevant examples. Since these are more readily available in your memory, you will likely judge these outcomes as being more common or frequently occurring.

For example, if you are thinking of flying and suddenly think of a number of recent airline accidents, you might feel like air travel is too dangerous and decide to travel by car instead. Because those examples of air disasters came to mind so easily, the availability heuristic leads you to think that plane crashes are more common than they really are.

Familiarity

The familiarity heuristic refers to how people tend to have more favorable opinions of things, people, or places they've experienced before as opposed to new ones. In fact, given two options, people may choose something they're more familiar with even if the new option provides more benefits.

Representativeness

The representativeness heuristic  involves making a decision by comparing the present situation to the most representative mental prototype. When you are trying to decide if someone is trustworthy, you might compare aspects of the individual to other mental examples you hold.

A soft-spoken older woman might remind you of your grandmother, so you might immediately assume that she is kind, gentle, and trustworthy. However, this is an example of a heuristic bias, as you can't know someone trustworthy based on their age alone.

The affect heuristic involves making choices that are influenced by the emotions that an individual is experiencing at that moment. For example, research has shown that people are more likely to see decisions as having benefits and lower risks when they are in a positive mood. Negative emotions, on the other hand, lead people to focus on the potential downsides of a decision rather than the possible benefits.

The anchoring bias involves the tendency to be overly influenced by the first bit of information we hear or learn. This can make it more difficult to consider other factors and lead to poor choices. For example, anchoring bias can influence how much you are willing to pay for something, causing you to jump at the first offer without shopping around for a better deal.

Scarcity is a principle in heuristics in which we view things that are scarce or less available to us as inherently more valuable. The scarcity heuristic is one often used by marketers to influence people to buy certain products. This is why you'll often see signs that advertise "limited time only" or that tell you to "get yours while supplies last."

Trial and Error

Trial and error is another type of heuristic in which people use a number of different strategies to solve something until they find what works. Examples of this type of heuristic are evident in everyday life. People use trial and error when they're playing video games, finding the fastest driving route to work, and learning to ride a bike (or learning any new skill).

Difference Between Heuristics and Algorithms

Though the terms are often confused, heuristics and algorithms are two distinct terms in psychology.

Algorithms are step-by-step instructions that lead to predictable, reliable outcomes; whereas heuristics are mental shortcuts that are basically best guesses. Algorithms always lead to accurate outcomes, whereas, heuristics do not.

Examples of algorithms include instructions for how to put together a piece of furniture or a recipe for cooking a certain dish. Health professionals also create algorithms or processes to follow in order to determine what type of treatment to use on a patient.

How Heuristics Can Lead to Bias

While heuristics can help us solve problems and speed up our decision-making process, they can introduce errors. As in the examples above, heuristics can lead to inaccurate judgments about how commonly things occur and about how representative certain things may be.

Just because something has worked in the past does not mean that it will work again, and relying on a heuristic can make it difficult to see alternative solutions or come up with new ideas.

Heuristics can also contribute to stereotypes and  prejudice . Because people use mental shortcuts to classify and categorize people, they often overlook more relevant information and create stereotyped categorizations that are not in tune with reality.

While heuristics can be a useful tool, there are ways you can improve your decision-making and avoid cognitive bias at the same time.

We are more likely to make an error in judgment if we are trying to make a decision quickly or are under pressure to do so. Whenever possible, take a few deep breaths . Do something to distract yourself from the decision at hand. When you return to it, you may find you have a fresh perspective, or notice something you didn't before.

Identify the Goal

We tend to focus automatically on what works for us and make decisions that serve our best interest. But take a moment to know what you're trying to achieve. Are there other people who will be affected by this decision? What's best for them? Is there a common goal that can be achieved that will serve all parties?

Process Your Emotions

Fast decision-making is often influenced by emotions from past experiences that bubble to the surface. Is your decision based on facts or emotions? While emotions can be helpful, they may affect decisions in a negative way if they prevent us from seeing the full picture.

Recognize All-or-Nothing Thinking

When making a decision, it's a common tendency to believe you have to pick a single, well-defined path, and there's no going back. In reality, this often isn't the case.

Sometimes there are compromises involving two choices, or a third or fourth option that we didn't even think of at first. Try to recognize the nuances and possibilities of all choices involved, instead of using all-or-nothing thinking .

Rachlin H. Rational thought and rational behavior: A review of bounded rationality: The adaptive toolbox . J Exp Anal Behav . 2003;79(3):409–412. doi:10.1901/jeab.2003.79-409

Shah AK, Oppenheimer DM. Heuristics made easy: An effort-reduction framework . Psychol Bull. 2008;134(2):207-22. doi:10.1037/0033-2909.134.2.207

Marewski JN, Gigerenzer G. Heuristic decision making in medicine .  Dialogues Clin Neurosci . 2012;14(1):77–89. PMID: 22577307

Schwikert SR, Curran T. Familiarity and recollection in heuristic decision making .  J Exp Psychol Gen . 2014;143(6):2341-2365. doi:10.1037/xge0000024

Finucane M, Alhakami A, Slovic P, Johnson S. The affect heuristic in judgments of risks and benefits . J Behav Decis Mak . 2000; 13(1):1-17. doi:10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S

Cheung TT, Kroese FM, Fennis BM, De Ridder DT. Put a limit on it: The protective effects of scarcity heuristics when self-control is low . Health Psychol Open . 2015;2(2):2055102915615046. doi:10.1177/2055102915615046

Mohr H, Zwosta K, Markovic D, Bitzer S, Wolfensteller U, Ruge H. Deterministic response strategies in a trial-and-error learning task . Inman C, ed. PLoS Comput Biol. 2018;14(11):e1006621. doi:10.1371/journal.pcbi.1006621

Lang JM, Ford JD, Fitzgerald MM.  An algorithm for determining use of trauma-focused cognitive-behavioral therapy .  Psychotherapy   (Chic) . 2010;47(4):554-69. doi:10.1037/a0021184

Bigler RS, Clark C. The inherence heuristic: A key theoretical addition to understanding social stereotyping and prejudice. Behav Brain Sci . 2014;37(5):483-4. doi:10.1017/S0140525X1300366X

del Campo C, Pauser S, Steiner E, et al.  Decision making styles and the use of heuristics in decision making .  J Bus Econ.  2016;86:389–412. doi:10.1007/s11573-016-0811-y

Marewski JN, Gigerenzer G. Heuristic decision making in medicine .  Dialogues Clin Neurosci . 2012;14(1):77-89. doi:10.31887/DCNS.2012.14.1/jmarewski

Zheng Y, Yang Z, Jin C, Qi Y, Liu X. The influence of emotion on fairness-related decision making: A critical review of theories and evidence .  Front Psychol . 2017;8:1592. doi:10.3389/fpsyg.2017.01592

Bazerman MH. Judgment and decision making. In: Biswas-Diener R, Diener E, eds.,  Noba Textbook Series: Psychology.  DEF Publishers.

By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Heuristics: Definition, Examples, And How They Work

Benjamin Frimodig

Science Expert

B.A., History and Science, Harvard University

Ben Frimodig is a 2021 graduate of Harvard College, where he studied the History of Science.

Learn about our Editorial Process

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

On This Page:

Every day our brains must process and respond to thousands of problems, both large and small, at a moment’s notice. It might even be overwhelming to consider the sheer volume of complex problems we regularly face in need of a quick solution.

While one might wish there was time to methodically and thoughtfully evaluate the fine details of our everyday tasks, the cognitive demands of daily life often make such processing logistically impossible.

Therefore, the brain must develop reliable shortcuts to keep up with the stimulus-rich environments we inhabit. Psychologists refer to these efficient problem-solving techniques as heuristics.

Heuristics decisions and mental thinking shortcut approach outline diagram. Everyday vs complex technique comparison list for judgments and fast, short term problem solving method vector

Heuristics can be thought of as general cognitive frameworks humans rely on regularly to reach a solution quickly.

For example, if a student needs to decide what subject she will study at university, her intuition will likely be drawn toward the path that she envisions as most satisfying, practical, and interesting.

She may also think back on her strengths and weaknesses in secondary school or perhaps even write out a pros and cons list to facilitate her choice.

It’s important to note that these heuristics broadly apply to everyday problems, produce sound solutions, and helps simplify otherwise complicated mental tasks. These are the three defining features of a heuristic.

While the concept of heuristics dates back to Ancient Greece (the term is derived from the Greek word for “to discover”), most of the information known today on the subject comes from prominent twentieth-century social scientists.

Herbert Simon’s study of a notion he called “bounded rationality” focused on decision-making under restrictive cognitive conditions, such as limited time and information.

This concept of optimizing an inherently imperfect analysis frames the contemporary study of heuristics and leads many to credit Simon as a foundational figure in the field.

Kahneman’s Theory of Decision Making

The immense contributions of psychologist Daniel Kahneman to our understanding of cognitive problem-solving deserve special attention.

As context for his theory, Kahneman put forward the estimate that an individual makes around 35,000 decisions each day! To reach these resolutions, the mind relies on either “fast” or “slow” thinking.

Kahneman

The fast thinking pathway (system 1) operates mostly unconsciously and aims to reach reliable decisions with as minimal cognitive strain as possible.

While system 1 relies on broad observations and quick evaluative techniques (heuristics!), system 2 (slow thinking) requires conscious, continuous attention to carefully assess the details of a given problem and logically reach a solution.

Given the sheer volume of daily decisions, it’s no surprise that around 98% of problem-solving uses system 1.

Thus, it is crucial that the human mind develops a toolbox of effective, efficient heuristics to support this fast-thinking pathway.

Heuristics vs. Algorithms

Those who’ve studied the psychology of decision-making might notice similarities between heuristics and algorithms. However, remember that these are two distinct modes of cognition.

Heuristics are methods or strategies which often lead to problem solutions but are not guaranteed to succeed.

They can be distinguished from algorithms, which are methods or procedures that will always produce a solution sooner or later.

An algorithm is a step-by-step procedure that can be reliably used to solve a specific problem. While the concept of an algorithm is most commonly used in reference to technology and mathematics, our brains rely on algorithms every day to resolve issues (Kahneman, 2011).

The important thing to remember is that algorithms are a set of mental instructions unique to specific situations, while heuristics are general rules of thumb that can help the mind process and overcome various obstacles.

For example, if you are thoughtfully reading every line of this article, you are using an algorithm.

On the other hand, if you are quickly skimming each section for important information or perhaps focusing only on sections you don’t already understand, you are using a heuristic!

Why Heuristics Are Used

Heuristics usually occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind at the same moment

When studying heuristics, keep in mind both the benefits and unavoidable drawbacks of their application. The ubiquity of these techniques in human society makes such weaknesses especially worthy of evaluation.

More specifically, in expediting decision-making processes, heuristics also predispose us to a number of cognitive biases .

A cognitive bias is an incorrect but pervasive judgment derived from an illogical pattern of cognition. In simple terms, a cognitive bias occurs when one internalizes a subjective perception as a reliable and objective truth.

Heuristics are reliable but imperfect; In the application of broad decision-making “shortcuts” to guide one’s response to specific situations, occasional errors are both inevitable and have the potential to catalyze persistent mistakes.

For example, consider the risks of faulty applications of the representative heuristic discussed above. While the technique encourages one to assign situations into broad categories based on superficial characteristics and one’s past experiences for the sake of cognitive expediency, such thinking is also the basis of stereotypes and discrimination.

In practice, these errors result in the disproportionate favoring of one group and/or the oppression of other groups within a given society.

Indeed, the most impactful research relating to heuristics often centers on the connection between them and systematic discrimination.

The tradeoff between thoughtful rationality and cognitive efficiency encompasses both the benefits and pitfalls of heuristics and represents a foundational concept in psychological research.

When learning about heuristics, keep in mind their relevance to all areas of human interaction. After all, the study of social psychology is intrinsically interdisciplinary.

Many of the most important studies on heuristics relate to flawed decision-making processes in high-stakes fields like law, medicine, and politics.

Researchers often draw on a distinct set of already established heuristics in their analysis. While dozens of unique heuristics have been observed, brief descriptions of those most central to the field are included below:

Availability Heuristic

The availability heuristic describes the tendency to make choices based on information that comes to mind readily.

For example, children of divorced parents are more likely to have pessimistic views towards marriage as adults.

Of important note, this heuristic can also involve assigning more importance to more recently learned information, largely due to the easier recall of such information.

Representativeness Heuristic

This technique allows one to quickly assign probabilities to and predict the outcome of new scenarios using psychological prototypes derived from past experiences.

For example, juries are less likely to convict individuals who are well-groomed and wearing formal attire (under the assumption that stylish, well-kempt individuals typically do not commit crimes).

This is one of the most studied heuristics by social psychologists for its relevance to the development of stereotypes.

Scarcity Heuristic

This method of decision-making is predicated on the perception of less abundant, rarer items as inherently more valuable than more abundant items.

We rely on the scarcity heuristic when we must make a fast selection with incomplete information. For example, a student deciding between two universities may be drawn toward the option with the lower acceptance rate, assuming that this exclusivity indicates a more desirable experience.

The concept of scarcity is central to behavioral economists’ study of consumer behavior (a field that evaluates economics through the lens of human psychology).

Trial and Error

This is the most basic and perhaps frequently cited heuristic. Trial and error can be used to solve a problem that possesses a discrete number of possible solutions and involves simply attempting each possible option until the correct solution is identified.

For example, if an individual was putting together a jigsaw puzzle, he or she would try multiple pieces until locating a proper fit.

This technique is commonly taught in introductory psychology courses due to its simple representation of the central purpose of heuristics: the use of reliable problem-solving frameworks to reduce cognitive load.

Anchoring and Adjustment Heuristic

Anchoring refers to the tendency to formulate expectations relating to new scenarios relative to an already ingrained piece of information.

 Anchoring Bias Example

Put simply, this anchoring one to form reasonable estimations around uncertainties. For example, if asked to estimate the number of days in a year on Mars, many people would first call to mind the fact the Earth’s year is 365 days (the “anchor”) and adjust accordingly.

This tendency can also help explain the observation that ingrained information often hinders the learning of new information, a concept known as retroactive inhibition.

Familiarity Heuristic

This technique can be used to guide actions in cognitively demanding situations by simply reverting to previous behaviors successfully utilized under similar circumstances.

The familiarity heuristic is most useful in unfamiliar, stressful environments.

For example, a job seeker might recall behavioral standards in other high-stakes situations from her past (perhaps an important presentation at university) to guide her behavior in a job interview.

Many psychologists interpret this technique as a slightly more specific variation of the availability heuristic.

How to Make Better Decisions

Heuristics are ingrained cognitive processes utilized by all humans and can lead to various biases.

Both of these statements are established facts. However, this does not mean that the biases that heuristics produce are unavoidable. As the wide-ranging impacts of such biases on societal institutions have become a popular research topic, psychologists have emphasized techniques for reaching more sound, thoughtful and fair decisions in our daily lives.

Ironically, many of these techniques are themselves heuristics!

To focus on the key details of a given problem, one might create a mental list of explicit goals and values. To clearly identify the impacts of choice, one should imagine its impacts one year in the future and from the perspective of all parties involved.

Most importantly, one must gain a mindful understanding of the problem-solving techniques used by our minds and the common mistakes that result. Mindfulness of these flawed yet persistent pathways allows one to quickly identify and remedy the biases (or otherwise flawed thinking) they tend to create!

Further Information

  • Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: an effort-reduction framework. Psychological bulletin, 134(2), 207.
  • Marewski, J. N., & Gigerenzer, G. (2012). Heuristic decision making in medicine. Dialogues in clinical neuroscience, 14(1), 77.
  • Del Campo, C., Pauser, S., Steiner, E., & Vetschera, R. (2016). Decision making styles and the use of heuristics in decision making. Journal of Business Economics, 86(4), 389-412.

What is a heuristic in psychology?

A heuristic in psychology is a mental shortcut or rule of thumb that simplifies decision-making and problem-solving. Heuristics often speed up the process of finding a satisfactory solution, but they can also lead to cognitive biases.

Bobadilla-Suarez, S., & Love, B. C. (2017, May 29). Fast or Frugal, but Not Both: Decision Heuristics Under Time Pressure. Journal of Experimental Psychology: Learning, Memory, and Cognition .

Bowes, S. M., Ammirati, R. J., Costello, T. H., Basterfield, C., & Lilienfeld, S. O. (2020). Cognitive biases, heuristics, and logical fallacies in clinical practice: A brief field guide for practicing clinicians and supervisors. Professional Psychology: Research and Practice, 51 (5), 435–445.

Dietrich, C. (2010). “Decision Making: Factors that Influence Decision Making, Heuristics Used, and Decision Outcomes.” Inquiries Journal/Student Pulse, 2(02).

Groenewegen, A. (2021, September 1). Kahneman Fast and slow thinking: System 1 and 2 explained by Sue. SUE Behavioral Design. Retrieved March 26, 2022, from https://suebehaviouraldesign.com/kahneman-fast-slow-thinking/

Kahneman, D., Lovallo, D., & Sibony, O. (2011). Before you make that big decision .

Kahneman, D. (2011). Thinking, fast and slow . Macmillan.

Pratkanis, A. (1989). The cognitive representation of attitudes. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 71–98). Hillsdale, NJ: Erlbaum.

Simon, H.A., 1956. Rational choice and the structure of the environment. Psychological Review .

Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185 (4157), 1124–1131.

Print Friendly, PDF & Email

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base
  • Research bias
  • What Is the Affect Heuristic? | Example & Definition

What Is the Affect Heuristic? | Example & Definition

Published on December 28, 2022 by Kassiani Nikolopoulou . Revised on November 3, 2023.

The affect heuristic occurs when our current emotional state or mood influences our decisions. Instead of evaluating the situation objectively, we rely on our “gut feelings” and respond according to how we feel. As a result, the affect heuristic can lead to suboptimal decision-making.

You were very excited about the opportunity, and now you feel disheartened.

Table of contents

What is the affect heuristic, why does the affect heuristic occur, when is the affect heuristic a problem, affect heuristic example, how to avoid the affect heuristic, other types of research bias, frequently asked questions.

The affect heuristic is a type of cognitive bias that plays a role in decision-making. Instead of using objective information, we rely upon our emotions to evaluate a situation. This can also serve as a shortcut to solve a problem quickly. Here, affect can be viewed as:

  •  a feeling state that people experience, such as happiness or sadness.
  •  a quality associated with a stimulus, or anything that can trigger us to act, such as sounds, words, or temperature changes.

When people need to make a choice under time pressure, they are likely to feel the need to be efficient, or to simply go with what seems the best option. This leads them to rely on heuristics or mental shortcuts. The affect heuristic causes us to consult our emotions and feelings when we need to form a judgment but lack the information or time to reflect more deeply.

The affect heuristic

  • If you used to skate as a kid and have many positive memories, you might feel that the benefit (fun) outweighs any risks (falling). Therefore, you might be more inclined to try it again.
  • On the other hand, if you fell and broke your arm skating as a kid, you most likely associate skating with danger, and feel that it’s a bad idea.

More specifically, the affect heuristic impacts our decision-making by influencing how we perceive risks and benefits related to an action. In other words, when we like an activity, we tend to judge its risk as low, and its benefit as high.

The opposite is true when we dislike something. Here, we tend to judge its risk as high and its benefit as low. In this way, how we feel about something directs our judgment of risk and benefit. This, in turn, motivates our behavior.

Similarly, our mood can influence our decisions. When we are in a good mood, we tend to be optimistic about decisions and focus more on the benefits. When we are in a bad mood, we focus more on the risks and the perceived lack of benefits related to a decision.

The affect heuristic occurs due to emotional or affective reactions to a stimulus. These are often the very first reactions we have. They occur automatically and rapidly, influencing how we process and evaluate information. For example, you can probably sense the different feelings associated with the word “love” as opposed to the word “hate.”

When we subconsciously let these feelings guide our decisions, we rely on the affect heuristic. This is because we perceive reality in two fundamentally different ways or systems. Various names are used to describe them:

  • One is often labeled as intuitive, automatic, and experiential. 
  • The other is labeled as analytical, verbal , and rational. 

While the rational way of comprehending reality relies on logic and evidence, the experiential one relies on feelings we’ve come to associate with certain things. Through the experiential system, we store events or concepts in our minds, “tagging” them with positive or negative feelings. When faced with a decision, we consult our “pool”, containing all the positive and negative tags. These then serve as cues for our judgment.

Although deeper analysis is certainly important in some decision-making contexts, using our emotions is a quicker, easier, and more efficient way to navigate a complex, uncertain, or sometimes even dangerous world.

Although the affect heuristic allows us to make decisions quickly and efficiently (similarly to the availability heuristic or anchoring bias ), it can also deceive us. There are two important ways that the affect heuristic can lead us astray:

  • One occurs when others try to manipulate our emotions in an attempt to affect or control our behavior. For example, politicians often appeal to fear in order to make the public feel that the country will suffer dire consequences if they aren’t elected or certain policies aren’t implemented.
  • The other results from the natural limitations of the experiential system. For instance, we can’t find the correct answer to a math problem by relying on our feelings. Besides, if it was always enough to follow our intuition, there would be no need for the rational/analytic system of thinking.

The affect heuristic is a possible explanation for a range of purchase decisions, such as buying insurance.

The scenarios involved one of the following:

  • An antique clock that no longer works and can’t be repaired. However, it has sentimental value: it was a gift to you from your grandparents on your 5th birthday. You learned how to tell time from it, and have always loved it very much.
  • An antique clock that no longer works and can’t be repaired. It does not have much sentimental value to you. It was a gift from a remote relative on your 5th birthday. You didn’t like it very much then, and you still don’t have any particular feelings towards it now.

Both groups of participants were then asked to indicate the maximum amount they were willing to pay for insurance against loss in a shipment to a new city. In the event of loss, the insurance paid $100 in both cases.

Due to the affect heuristic, how people feel about something drives their judgment. Communicators, such as public relations professionals, know this and can use it to influence our opinions.

By using terms like “smart bombs” and “peacekeeper missiles” for nuclear weapons and “excursions” for reactor accidents, proponents of nuclear energy downplay the risks of nuclear applications and highlight their benefits. Although not without resistance, they attempt to frame nuclear concepts in neutral or positive ways using this language. As a result, the public attaches a neutral or positive sentiment to the technology, leading to a framing effect .

The affect heuristic is a helpful shortcut, but it can also cloud our judgment. Here are a few steps you can take to minimize the negative impact of the affect heuristic:

  • Acknowledge that emotions can influence our decisions, no matter how rational we think we are. This is especially true when we lack the information or time to think things through.
  • Slow down your thinking process if possible. Instead of making a snap judgment, take the time to analyze all the information at hand and consider all the options before reaching your conclusion.
  • Avoid making an important decision when your emotions are running high. Regardless of whether the emotion is positive or negative, try to delay decision-making until you are in a “regular” state of mind.

Cognitive bias

  • Confirmation bias
  • Baader–Meinhof phenomenon
  • Availability heuristic
  • Halo effect
  • Framing effect
  • Affect heuristic
  • Anchoring heuristic

Selection bias

  • Sampling bias
  • Ascertainment bias
  • Attrition bias
  • Self-selection bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias
  • Hawthorne effect
  • Observer bias
  • Omitted variable bias
  • Publication bias
  • Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Placebo effect
  • Actor-observer bias
  • Ceiling effect
  • Ecological fallacy

When customers are asked if they want to extend the warranty for a laptop they’ve just bought, few of them seriously think about relevant factors (e.g., the probability that the laptop will be damaged or the likely cost of repair).

Most people rely on the affect heuristic : the more they cherish their new laptop, the more willing they are to pay for an extended warranty.

Even though the affect heuristic and the availability heuristic are different, they are closely linked. This is because availability occurs not only through ease of recall or imaginability, but because remembered and imagined images come tagged with affect or emotion.

For example, availability can explain why people overestimate certain highly publicized causes of death like accidents, homicides, or tornadoes and underestimate others, such as diabetes, asthma, or stroke. The highly publicized ones are more emotionally charged and, thus, more likely to receive attention.

In other words, the affect heuristic is essentially a type of availability mechanism in which emotionally charged events quickly spring to mind.

Affect in psychology is any experience of feeling, emotion, or mood. It is often described as positive or negative. Affect colors how we see the world and how we feel about people, objects, and events.

Because of this, it can also impact our social interactions, behaviors, and judgments. For example, we often make decisions based on our “gut feeling.” When we do this, we rely on what is called the affect heuristic .

Sources in this article

We strongly encourage students to use sources in their work. You can cite our article (APA Style) or take a deep dive into the articles below.

Nikolopoulou, K. (2023, November 03). What Is the Affect Heuristic? | Example & Definition. Scribbr. Retrieved April 1, 2024, from https://www.scribbr.com/research-bias/affect-heuristic/
Finucane, M. L., Alhakami, A. S., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making , 13 (1), 1–17. https://doi.org/10.1002/(sici)1099-0771(200001/03)13:1
Hsee, C. K. (2006). The Affection Effect in Insurance Decisions. Journal of Risk and Uncertainty , 20 (2). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=930041
Skagerlund, K., Forsblad, M., Slovic, P., & Västfjäll, D. (2020). The Affect Heuristic and Risk Perception – Stability Across Elicitation Methods and Individual Cognitive Abilities. Frontiers in Psychology , 11 . https://doi.org/10.3389/fpsyg.2020.00970

Is this article helpful?

Kassiani Nikolopoulou

Kassiani Nikolopoulou

Other students also liked, the availability heuristic | example & definition, what is cognitive bias | definition, types, & examples, what is the framing effect | definition & examples.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Front Psychol

The Recognition Heuristic: A Review of Theory and Tests

Thorsten pachur.

1 Department of Psychology, University of Basel, Basel, Switzerland

Peter M. Todd

2 Cognitive Science Program, Indiana University, Bloomington, IN, USA

Gerd Gigerenzer

3 Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany

Lael J. Schooler

Daniel g. goldstein.

4 London Business School, London, UK

The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the recognition heuristic: (a) that recognition is often an ecologically valid cue; (b) that people often follow recognition when making inferences; (c) that recognition supersedes further cue knowledge; (d) that its use can produce the less-is-more effect – the phenomenon that lesser states of recognition knowledge can lead to more accurate inferences than more complete states. After we contrast the recognition heuristic to other related concepts, including availability and fluency, we carve out, from the existing findings, some boundary conditions of the use of the recognition heuristic as well as key questions for future research. Moreover, we summarize developments concerning the connection of the recognition heuristic with memory models. We suggest that the recognition heuristic is used adaptively and that, compared to other cues, recognition seems to have a special status in decision making. Finally, we discuss how systematic ignorance is exploited in other cognitive mechanisms (e.g., estimation and preference).

Introduction

In Sir Arthur Conan Doyle's The Final Problem , Sherlock Holmes finally faces his arch enemy, Professor Moriarty. In describing Moriarty to Watson, Holmes asks Watson, “You have probably never heard of Professor Moriarty?” “Never.” “Aye, there's the genius and the wonder of the thing! The man pervades London, and no one has heard of him. That's what puts him on a pinnacle in the records of crime.” Holmes thus implies that extraordinary things usually cannot avoid being heard of and thus being recognized by many people. It would be much less surprising for Watson to have never heard of Moriarty if Moriarty were insignificant. In other words, recognition may often (but not always) be useful because it can tell us something about the objects in question. Specifically, if we have heard of one object but not another, this can be an indication that the objects differ in other respects as well. Recognition would then allow us to make inferences about these other characteristics. To illustrate, imagine a culturally interested American tourist who, when planning her visit to Germany, needs to make a quick guess whether Heidelberg or Erlangen has more museums. Having heard of Heidelberg but not Erlangen, she could exploit her partial ignorance to make the (correct) inference that Heidelberg has more.

One strategy that uses recognition to make inferences from memory about the environment is what Goldstein and Gigerenzer ( 1999 , 2002 ) called the recognition heuristic . For two-alternative choice tasks, where one has to decide which of two objects scores higher on a criterion, the heuristic can be stated as follows:

If one object is recognized, but not the other, then infer that the recognized object has a higher value on the criterion .

In situations where the recognition heuristic exploits the presence of a particular information structure – namely, that recognition knowledge about natural environments is often systematic rather than random – to make good decisions, the recognition heuristic is ecologically rational , exemplifying Herbert Simon's vision of rationality as resulting from the close fit between the mind and the environment (Simon, 1990 ). One condition that should govern whether this strategy will be used is therefore whether the environment is appropriately structured (meaning, as we will define later, that there is a high recognition validity). When the environment is not appropriate for using the recognition heuristic, decision makers may ignore recognition, oppose recognition, or factor in information beyond recognition, as discussed later in this article.

The exploitable relation between subjective recognition and some (not directly accessible) criterion results from a process by which the criterion influences object recognition through mediators, such as mentions in newspapers, on the Internet, on radio, on television, or by word of mouth. Specifically, objects with high criterion values tend to be mentioned more frequently in the news, frequent mentions increase the likelihood that their name will be recognized, and as a consequence, recognition becomes correlated with high criterion values (for empirical support, see Goldstein and Gigerenzer, 2002 ; Pachur and Hertwig, 2006 ; Pachur and Biele, 2007 ; Scheibehenne and Bröder, 2007 ).

Our goal in this article is to summarize and to connect research on the recognition heuristic since Goldstein and Gigerenzer ( 1999 , 2002 ) first specified it, carve out from empirical tests of the recognition heuristic boundary conditions of its use, and point to novel questions that have emerged from this research. We start by describing and clarifying the basic characteristics and assumptions of the heuristic. For this purpose, we trace how the notion of the heuristic developed, and we locate recognition knowledge in relation to other knowledge about previous encounters with an object, such as the context of previous encounters, their frequency, and their ease of retrieval from memory (i.e., their fluency). Then we provide an overview of empirical evidence on two important issues: in what environments is the recognition heuristic ecologically rational? And do people follow recognition in these environments? We then review evidence for two bold prediction of the recognition heuristic: first, that when recognition knowledge discriminates between two objects, further cues are ignored; and second, that recognizing fewer objects can lead to higher inferential accuracy (the less-is-more effect). We close with a discussion of recent connections of the recognition heuristic with memory models and highlight relations to other judgment phenomena influenced by a previous encounter with an object.

The Foundations and Implications of the Recognition Heuristic

The non-compensatory use of recognition.

The recognition heuristic makes a strong claim. It assumes that if people recognize one object but not the other, and there is a substantial recognition validity, recognition is used in a non-compensatory fashion – that is, no other cues can reverse the judgment indicated by recognition (as elaborated below, the heuristic does not apply to situations in which people already have conclusive criterion knowledge about the objects, which allows a response to be deduced). To appreciate this claim, let us trace the development of the notion of the recognition heuristic. In an early article that can be considered the basis for the fast-and-frugal heuristics program, Gigerenzer et al. ( 1991 ) discussed the potential role of recognition in making bets about unknown properties of the environment. When facing a task in which one has to decide which of two objects scores higher on some criterion (e.g., which of two soccer coaches has been more successful in the past), Gigerenzer et al. ( 1991 ) proposed that people first try to solve the problem by building and using a local mental model . A local mental model can be successfully constructed if (a) precise criterion values can be retrieved from memory for both objects, (b) intervals of possible criterion values for the two objects can be retrieved that do not overlap, or (c) elementary logical operations can compensate for missing knowledge. If no such local mental model can be constructed, people construct from declarative knowledge a probabilistic mental model (PMM). Such a model consists of probabilistic cues, that is, facts about an object that are correlated with the criterion for a clearly defined set of objects. In other words, a PMM connects the specific structure of the task with the probability structure of a corresponding natural environment (stored in long-term memory) and uses the probabilistic cues to solve the problem by inductive inference. Subjective recognition of an object (which Gigerenzer et al. referred to as the “familiarity cue”) was held to be one such cue.

While Gigerenzer et al. ( 1991 ) assumed that recognition functions similarly to objective cues (e.g., the cue that a city has an international airport), this view was later revised. Gigerenzer and Goldstein ( 1996 ) put forth the thesis that recognition holds a special status, because if an object is not recognized, it is not possible to retrieve cue values for that object from memory, and in this sense recognition precedes retrieval of cues. Recognition therefore serves as an initial screening step (if it correlates with the criterion, as used in the take-the-best heuristic and others) that precedes the search for further cue information. The thesis that recognition gives rise to non-compensatory processing was given prominence by Goldstein and Gigerenzer ( 2002 ), who described the recognition heuristic as follows: “the recognition heuristic is a non-compensatory strategy: if one object is recognized and the other is not, then the inference is determined; no other information can reverse the choice determined by recognition” (p. 82). “Information” here means cue values, not criterion values; in contrast, when a solution can be derived from criterion knowledge, local mental models can be applied, and the recognition heuristic does not come into play. For this reason, Goldstein and Gigerenzer ( 2002 ) did not even discuss local mental models, because their focus was on uncertain inferences as made by the recognition heuristic.

How could such a mechanism that bases a decision solely on recognition and ignores other cue knowledge make good inferences? First, recognition seems to have a retrieval primacy compared to other cue knowledge (Pachur and Hertwig, 2006 ). Recognition information is available to make an inference earlier than other information and enables one to make a quick and effortless decision, which is clearly beneficial when time is of the essence. Second, in some situations, information beyond recognition does not allow one to discriminate between options. For instance, customers are often unable to distinguish the taste of different beers or other products once the labels have been removed (e.g., Allison and Uhl, 1964 ). As a consequence, information beyond name recognition, which would take more time and effort to gather and process, may sometimes simply be useless. Third, it has been shown that the non-compensatory use of recognition can lead to more accurate inferences than mechanisms that integrate recognition with further cues (Gigerenzer and Goldstein, 1996 ). One reason for this is that if what is known about a recognized object is a set of negative cue values, this can lead to the object's unjustified rejection (recall that usually no knowledge can be retrieved for unrecognized objects). In a mathematical analysis of the recognition heuristic, Davis-Stober et al. ( 2010 ) identified conditions under which relying on one single cue (e.g., recognition) while ignoring further cue knowledge is actually the optimal strategy. The authors showed that as long as the cue is correlated with other cues, ignoring this further knowledge minimizes the maximal deviation from the cue weighting scheme that could be derived under perfect knowledge of the environment. Importantly, this result does not depend on the one cue being the most valid one.

Fourth, in important decision tasks during our evolutionary past, searching for information beyond recognition, even if it could be useful, may often have been dangerous. Take, for instance, foraging for food. The cost of being poisoned by sampling from unrecognized mushrooms was probably considerably higher than the cost of rejecting an unrecognized but harmless mushroom. As a consequence, an avoidance of searching for information beyond recognition could have evolved in some domains (Bullock and Todd, 1999 ). And some animals indeed often seem to choose food based on recognition and ignore other, potentially relevant information. For instance, Galef et al. ( 1990 ) observed that Norway rats preferred food they recognized from smelling other rats’ breath over food they did not recognize, irrespective of whether the other rat was ill (see Noble et al., 2001 , for a model of how this ignoring of further information may have evolved).

As we will discuss in greater detail below, the proposal that recognition is used in a non-compensatory fashion has led to considerable protest in the literature, with the argument that such a model would be too simple to capture human decision making. Some of this protest was due to misunderstandings of the term “non-compensatory.” Although Goldstein and Gigerenzer ( 2002 ) referred in their use of the term to the judgment and decision making literature (which precisely define what it means to say that a strategy is non-compensatory), the nuances of this definition were not clear to all readers [including one of the authors of this article (Lael J. Schooler)]. Oppenheimer ( 2003 ), for instance, argued that because people seem to make judgments against recognition when they have criterion knowledge contradicting it, the recognition heuristic is not descriptive of how people make decisions. Yet the term “compensatory” refers to a trade-off between cue values, not between criterion values and cues such as recognition (Gigerenzer and Goldstein, 2011 ). Moreover, as mentioned before, having conclusive criterion knowledge is not a situation in which the recognition heuristic or any other inductive strategy is supposed to be used in the first place. In addition, Goldstein and Gigerenzer modeled inferences from memory in which unrecognized objects have unknown cue values. Cases in which objects are unrecognized but cue values are known (e.g., inspecting a new product in a grocery store) were not in the domain of the heuristic.

Moreover, the protest against the notion of a non-compensatory use of recognition may appear surprising, given that non-compensatory choices are commonly observed. As the authors of one classic review of 45 process studies put it, “the results firmly demonstrate that non-compensatory strategies were the dominant mode used by decision makers” (Ford et al., 1989 , p. 75). Perhaps more striking is that the predictions of another memory-based heuristic, availability (Tversky and Kahneman, 1973 ), are also non-compensatory (based on just a single variable, e.g., speed of recall), but this seems to have bothered no one.

Adaptive use of the recognition heuristic

Gigerenzer et al. ( 1999 ) assumed that the recognition heuristic is one of a set of strategies – the adaptive toolbox – that decision makers have at their disposal. One of the conditions in which the recognition heuristic should be applied is when recognition is (strongly) correlated with the criterion. Conversely, when recognition is only a poor cue, the recognition heuristic should not be used (at least, if a better strategy exists). To quantify the accuracy achievable by using the recognition heuristic to make criterion comparisons among a class of objects (e.g., comparing the populations of Swedish cities), Goldstein and Gigerenzer ( 2002 ) proposed the recognition validity α. It is calculated as

where R and W equal the number of correct (right) and incorrect (wrong) inferences, respectively, that are made on all object pairs when one object is recognized and the other is not and the recognized object is judged to have the higher criterion value. The validity of object knowledge beyond recognition, which can be used to make a decision when both objects are recognized, knowledge validity β, is defined as the proportion of correct inferences among the cases where both objects are recognized.

The recognition and knowledge validities are defined relative to a reference class (Brunswik, 1943 ; Gigerenzer et al., 1991 ), which clearly specifies the population of objects that are to be judged (e.g., predicting the outcome of tennis matches at a Grand Slam tournament, or comparing the population sizes of the 50 largest British cities). To be able to make a reasonable prediction of whether people will use the recognition heuristic in a particular judgment task, it is necessary to know the reference class from which participants think the objects are drawn. Without a reference class, the recognition validity is not defined, and it is unclear how in this situation people might choose to use or suspend the heuristic. (The question of how of the recognition heuristic is selected is discussed in Low Recognition Validity and Discrediting Source Knowledge; see also Pachur and Hertwig, 2006 ).

The less-is-more effect

The recognition heuristic can lead to a surprising phenomenon, in which less knowledge can lead to more accurate decisions. How is this possible? When no objects are recognized (and no other information can be gleaned from the name or image), a decision maker comparing all possible pairs of the objects can only guess which object has the greater criterion value. With an increasing number of recognized objects, there will be more and more pairs in which only one object is recognized, but also more cases in which both objects are recognized. The proportion of pairs with only one recognized object is highest when half of the objects are recognized and decreases again thereafter as a majority of objects are recognized. Now, under certain conditions, the expected accuracy of all resulting decisions (those made both with and without recognition) reaches a maximum when more than half, but fewer than all objects are recognized. When all objects are recognized, all choices have to be made based on knowledge beyond recognition, if available (because in this case the recognition heuristic is no longer applicable). Under those conditions, the accuracy of choices when all objects are recognized is lower than when at least some objects are not recognized and decision makers can benefit from the recognition heuristic's greater accuracy in this environment.

What are these conditions under which less (knowledge) can be more (accurate)? Examining the recognition heuristic analytically, Goldstein and Gigerenzer ( 2002 ) showed that a less-is-more effect will emerge in a comparison task if (but not only if) the recognition validity (α) is higher than the knowledge validity (β), under the assumption that the validities are constant across different levels of the number of recognized objects, n (although they showed in computer simulations that the less-is-more effect can also occur when α is not constant). More recently, Pachur ( 2010 ) highlighted that the less-is-more effect is strongly reduced if people who recognize more objects also have a higher knowledge validity, that is, if n and the knowledge validity are positively correlated (see also Smithson, 2010 ). Finally, the less-is-more effect is also influenced by the quality of recognition memory. Specifically, if recognition memory is imperfect (i.e., if recognition does not always correctly indicate whether an object has actually been encountered or not) a less-is-more effect can occur even if the recognition validity is not higher than the knowledge validity (Katsikopoulos, 2010 ).

Information about previous encounters: what recognition is and isn't

The recognition heuristic uses information about previous encounters with an object. There are multiple dimensions of information about such encounters that can be stored (e.g., frequency, context knowledge), and even characteristics of the process of retrieving this information can be exploited for an inference (e.g., the time required to recognize an object; Schooler and Hertwig, 2005 ). The recognition heuristic uses only one of these various types of information: belief regarding whether or not an encounter occurred. But the term “recognition” has been applied in the literature to conceptually rather different things. Therefore, it is useful to clearly distinguish the information that the recognition heuristic employs from other forms of information about object encounters, and our intended meaning of the term recognition from other meanings.

First, “recognition” as Goldstein and Gigerenzer ( 2002 ) used it refers to the distinction “between the truly novel and the previously experienced” (p. 77). It thus differs from episodic recognition , which is commonly studied in research on recognition memory (cf. Pachur et al., 2009 ). In a typical recognition memory experiment, participants first study a list of items (usually existing words such as chair ) and are later asked to go through a new list composed of previously studied plus unstudied items and pick out the ones that were on the original list. In other words, in these experiments typically none of the items are actually novel, because they are commonly used words. Therefore, the “mere” (or semantic ) recognition that the recognition heuristic employs is insufficient to identify the correct items in this task, and knowledge about the context (i.e., episodic knowledge) in which the previously studied items were originally presented is required. The recognition heuristic does not require such episodic knowledge, because semantic recognition alone differentiates novel from previously encountered objects. Note that with novel objects, in this conception, no further cue knowledge can be available. Moreover, recognition in Goldstein and Gigerenzer's sense is not independent of a reference class. A German participant may know that she has heard of Paris, France but not Paris, Tennessee (population 10,000), and not treat Paris as recognized on a test of US cities. In addition to recognition being sensitive to a person's conception of the reference class, recognition validity, and even the decision to apply the recognition heuristic hinge on the reference class as well (see below).

A second important distinction is between (semantic) recognition and frequency information, that is, knowledge about the number of times an object has been encountered in the past (e.g., Hintzman and Curran, 1994 ). The recognition heuristic does not distinguish between objects one has encountered 10 times and those encountered 60 times (as long as both are recognized or unrecognized). This is one element that makes the recognition heuristic different from the availability heuristic (Tversky and Kahneman, 1973 ), which makes use of ease of retrieval or the quantity of recalled items (for a discussion of the different notions of availability see Hertwig et al., 2005 ). To make an inference, one version of the availability heuristic retrieves instances of the target event categories, such as the number of people one knows who have cancer compared to the number of people who have suffered from a stroke (Hertwig et al., 2005 ). The recognition heuristic, by contrast, bases an inference simply on the ability (or lack thereof) to recognize the names of the event categories.

A recognition assessment, which feeds into the recognition heuristic, unfolds over time. The speed with which this recognition assessment is made – fluency – can itself be informative and can be used to infer other facts, for instance, how frequently an object has been encountered in the past 1 . The recognition heuristic does not draw on fluency information and only considers whether an object is recognized or not. The notion of inferences based on recognition speed, however, has been elaborated in the fluency heuristic (Schooler and Hertwig, 2005 ), which uses recognition speed to distinguish between two recognized objects (i.e., where the recognition heuristic does not apply) and lends computational precision to a long tradition of research on fluency (e.g., Jacoby and Dallas, 1981 ; Kelley and Jacoby, 1998 ; Oppenheimer, 2008 ).

Finally, collective recognition – the proportion of people in some population who recognize an object – has been used to examine the ecological rationality of the recognition heuristic. Collective recognition has been found to be correlated with environmental quantities such as stock profitability (Borges et al., 1999 ; Ortmann et al., 2008 ) and sports success (Serwe and Frings, 2006 ; Pachur and Biele, 2007 ). Nevertheless, these tests are not direct implementations of the recognition heuristic, which models the use of individual recognition. Of course an individual could use collective recognition information (assuming he or she knows it) to make inferences about the world. However, the cognitive processes involved would be different from the recognition heuristic (e.g., including recall of the collective recognition rates or their estimation in other ways, such as by the number of people observed to have chosen some option – see Todd and Heuvelink, 2007 ).

To summarize, the recognition heuristic is a model of memory-based inferences. It leads to good inferences in the real world if recognition is correlated with the criterion to be inferred. The heuristic is a precisely defined algorithm that gives rise to a number of specific predictions: first, recognition determines choices even when further cues on the recognized object speak against it (i.e., non-compensatory recognition use). Second, as the recognition heuristic is assumed to be a tool in the mind's adaptive toolbox, people should apply the heuristic if recognition is substantially correlated with the criterion, but not if recognition is not predictive. And third, the recognition heuristic can produce a less-is-more effect where less knowledge can lead to higher accuracy. Next we describe empirical tests of the assumptions and predictions that the heuristic makes. In Section “Ecological Analyses of Recognition,” we summarize studies that have examined the predictive power of recognition in the real world and discuss when and when not recognition is a good cue. In Section “The Recognition Heuristic as a Descriptive Model,” we turn to empirical tests of people's use of the recognition heuristic. We extract from the existing studies the boundary conditions of the recognition heuristic's use, summarize evidence for the predicted non-compensatory use of recognition and the less-is-more effect.

Ecological Analyses of Recognition

The recognition heuristic can be used as an ecologically rational tool from the mind's adaptive toolbox in situations where recognition is informative about a judgment to be made. Similarly, in problem solving (e.g., Simon, 1990 ) and in schema-based decision making (for an overview, see Goldstein and Weber, 1997 ) the assumption is made that recognition memory helps in quickly accessing knowledge structures that previously proved relevant in similar tasks. In what domains is recognition informative for making inferences – that is, where is it correlated with objective quantities? The degree to which recognition predicts a criterion in a given domain can be assessed in two ways. The first is to determine for a group of people their individual recognition validities α (based on their individual rates of recognizing the objects in a reference class) and then take the average recognition validities as an estimate of recognition's predictive value (for a critical discussion, see Katsikopoulos, 2010 ). A second possibility is to use the recognition responses of the group to calculate the correlation between the objects’ collective recognition rates (defined as the proportion of people recognizing each object) and their criterion values, yielding the recognition correlation (Goldstein and Gigerenzer, 2002 ). When deviations from a perfect association between recognition rates and the criterion are due to unsystematic error (i.e., when objects with higher criterion values are as likely to be unrecognized as objects with lower criterion values are likely to be recognized), the two measures are related as follows (Pachur, 2010 ):

where r s is the recognition correlation expressed as a Spearman rank correlation.

When is recognition a good predictor?

Goldstein and Gigerenzer ( 2002 ) gave an initial overview of domains where recognition is a good predictor of particular criteria. Across a broad set of geographic domains, such as deserts, cities, lakes, and rivers, with criterion values corresponding to size or length, they found average recognition validities ranging between 0.64 and 0.95. Since then, high recognition validities in geographic domains have been replicated repeatedly and across a number of different countries (e.g., Pohl, 2006 ; Pachur et al., 2008 ). For instance, in an analysis of the 50 largest cities of four European countries (Italy, France, England, and Spain), Pachur et al. ( 2008 ) found recognition validities between 0.72 and 0.78.

The criterion values of objects in geographic domains (e.g., river lengths, city sizes) are relatively stable and do not change much over time (or only rather slowly), often allowing an association between people's recognition of the objects and the objects’ criterion values to arise. Surprisingly, however, recognition also seems to be a valid predictor in dynamic environments. One example for a dynamic environment is sports, where new stars can rise quickly and previous ones remain well-known long after their peak performance. Serwe and Frings ( 2006 ) assessed how well the recognition heuristic was able to forecast the winners of the tennis matches at the 2003 Wimbledon tournament. This is a difficult problem: two Association of Tennis Professionals (ATP) rankings, that consider detailed accounts of the players’ past performance, predicted 66 and 68% of the matches correctly, and the seedings of the Wimbledon experts predicted 69%. Serwe and Frings ( 2006 ) asked German tennis amateurs to indicate which of the tournament players they recognized. Although some of the players that the amateurs recognized were no longer very successful or were highly recognized primarily because they were also German, the recognition heuristic, using the individual recognition of players by the tennis amateurs, nonetheless correctly predicted 73% of the matches in which it could be applied and collective recognition similarly predicted 72% (for a replication see Scheibehenne and Bröder, 2007 ). Extensive knowledge (such as the player rankings) produced fewer correct forecasts than systematic partial ignorance (i.e., partial recognition). As knowledge in many domains is limited and partial, just like that of the amateur tennis players, the recognition heuristic can be an efficient tool for decision making when recognition is correlated with a criterion in the real world.

Further analyses have confirmed the accuracy of recognition in the sports domain. In a study on forecasts of the matches of the 2004 European Soccer Championship, Pachur and Biele ( 2007 ) asked laypeople which of the participating national teams they had heard of before. Using laypeople's recognition, the authors then found that strictly following the recognition heuristic would have led, on average, to 71% correct forecasts. However, while this was significantly better than chance performance, the authors could not replicate the finding by Serwe and Frings ( 2006 ) that recognition enables better forecasts than expert information: Fédération Internationale de Football Association (FIFA) rankings and rankings based on the previous performance of the teams achieved 85 and 89% correct forecasts, respectively. Finally, Snook and Cullen ( 2006 ) found in a study with Newfoundland students that their recognition led to an average of 85% correct judgments for the task of determining which of two National Hockey League (NHL) players had more career points.

In addition to sports, recognition has been shown to be useful in other competitive domains, such as political elections (Marewski et al., 2005 ), quality of US colleges (Hertwig and Todd, 2003 ), wealth of individual persons (Frosch et al., 2007 ), and performance of stocks (Borges et al., 1999 ; Ortmann et al., 2008 ; but see Boyd, 2001 , for a possible restriction of that domain to rising stock markets). Thus, even in environments where objects can change their values on the criterion dimension rather quickly, recognition can prove to be a powerful predictor. Furthermore, forgetting can play a crucial role in maintaining an effective level of ignorance in such environments. To the degree that objects that objects with small criterion values are mentioned infrequently in the media and mention frequency is correlated with the activation in memory, objects with small criterion values are also more likely to be forgotten (Schooler and Hertwig, 2005 ).

When is recognition not a good predictor?

Despite the apparent breadth of domains in which recognition can be exploited to infer a criterion, recognition, of course, does not predict everything. In which kinds of environments does it fail? First, recognition will not be correlated with criteria where people or the media talk about everything along the criterion dimension equally often (or equally rarely) or talk primarily about both ends of the dimension (e.g., very large and very small countries, or tiny and giant animals). In such cases more mentions of an object (and hence greater recognition) does not imply a high criterion value. To illustrate, Pohl ( 2006 ) found that the population of Swiss cities, but not their distance from the city Interlaken, is correlated with recognition. Correspondingly, he reported high reliance on the recognition heuristic when people were asked to judge which of two cities is larger, while when asked to judge which was closer to Interlaken, the reliance on recognition dropped to chance level.

Second (and relatedly), recognition does not seem to be a good predictor for criteria where the frequency of mentions in the media is driven by two (or more) factors that are themselves negatively correlated. Figure ​ Figure1 1 illustrates this situation. For instance, frequent diseases are often discussed and written about because they can affect many people. At the same time, deadly or otherwise severe diseases are also often talked about – but severe diseases tend to be rather rare (Ewald, 1994 ). Mentions in the media and recognition of diseases are thus driven by factors that are negatively correlated (i.e., frequency of occurrence and severity). As a result, recognition is a relatively weak predictor of the frequency of occurrence of diseases: a recognized infectious disease is more common than an unrecognized one only about 60% of the time (Pachur and Hertwig, 2006 ). Similarly, for inferring the relative population size of animal species (a domain studied by Richter and Späth, 2006 ), recognition is unlikely to be a good predictor: while animal species with a large population (e.g., pigeons) are often well-known, so are endangered – and thus rare – species (e.g., pandas). Consistent with this ecological analysis, Richter and Späth ( 2006 ) reported little reliance on the recognition heuristic.

An external file that holds a picture, illustration, etc.
Object name is fpsyg-02-00147-g001.jpg

Hypothetical plot for a task environment in which the recognition heuristic is not ecologically rational: predicting the frequency of diseases . Here, the number of mentions of a disease in the media (and thus its recognition) increases toward both extremes of the criterion dimension, for negatively correlated reasons (frequency vs. severity). As a consequence, recognition is uncorrelated with the criterion, and α is around 0.5.

In sum, there is evidence that recognition is highly informative in many real-world domains and we are beginning to understand the conditions under which recognition is informative (though we do not have a complete theory of these conditions yet). Importantly, also other information extracted from previous encounters with objects in real-world domains seems to be informative and exploitable for making inferences, such as fluency (Hertwig et al., 2008 ) and some measures of availability (Hertwig et al., 2005 ). Environmental analyses are a first step in understanding the ecological rationality of decision mechanisms that use any of these sources of information.

The Recognition Heuristic as a Descriptive Model

Whereas the previous section reviewed findings supporting the recognition heuristic as an ecologically rational inference tool, this section provides an overview of studies that have investigated how well the model predicts human behavior. The recognition heuristic has been tested in a wide variety of domains and situations, making it possible to map more systematically the conditions under which the heuristic describes behavior more or less accurately. We will start with evidence showing that, as predicted by the recognition heuristic, many decisions align with recognition. This will be followed by a discussion of conditions under which people seem to systematically avoid basing their decisions on recognition. In the third and fourth parts of this section, we turn to tests of the recognition heuristic's bold predictions of non-compensatory processing (i.e., that all other cues beyond recognition are ignored) and that less can be more.

When do people's decisions follow recognition?

The recognition heuristic in inference tasks.

In general, in domains where recognition is a good predictor (i.e., when the recognition validity α is high), a large proportion of people's judgments in laboratory experiments are in line with the recognition heuristic's predicted choices (typically around 90%). Goldstein and Gigerenzer ( 2002 ) observed that when American students were asked which of two German cities is larger (a domain for which Gigerenzer and Goldstein, 1996 , reported a recognition validity of 0.80) and they recognized one city but not the other, they picked the recognized one in 89% of the cases (and were consequently correct 71% of the time). Similarly high rates of decisions in line with recognition were found for Swiss, Belgian, Italian (Pohl, 2006 ), and British cities (Pachur et al., 2008 ), all of which are domains where the recognition validity is high. Pohl ( 2006 ; Experiment 4) found evidence for a frequent use of the recognition heuristic for other geographic materials, such as mountains, rivers, and islands.

In their application of the recognition heuristic to the sports domain, Snook and Cullen ( 2006 ) asked their participants to judge the relative number of career points achieved by different NHL players. As mentioned above, recognition is a highly useful piece of information for this task, and accordingly, a recognized player was chosen over an unrecognized one 95% of the time, even when participants had no further knowledge about the recognized player. This also led them to correct inferences 87% of the time.

The recognition heuristic in forecasting tasks

One objection to early tests of the recognition heuristic was that recognition knowledge might be confounded with criterion knowledge in inference tasks (Oppenheimer, 2003 ). In forecasting, by contrast, where the task is to judge a criterion that lies in the future, one cannot know the criterion for certain, making it possible to test this objection. Subsequently, it has been shown for predicting tennis matches (Serwe and Frings, 2006 ; Scheibehenne and Bröder, 2007 ), soccer games (Ayton and önkal, 2004 ; Pachur and Biele, 2007 ), and political elections (Marewski et al., 2005 ) that people choose a recognized object over an unrecognized one even when making forecasts (around 80–90% of the time). Similarly, though not a direct test of the recognition heuristic, Weber et al. ( 2005 ) found that name recognition of a stock was associated with less perceived future riskiness, which, in turn, led to a higher tendency to decide to invest in the stock.

When do people not follow recognition?

The evidence just reviewed suggests that in particular environments people might exploit the fact that they have heard of one object but not another to infer further differences between the objects. Yet an adaptive use of the recognition heuristic also requires that people do not always follow recognition. We now consider characteristics of task environments that make them inappropriate for the application of the recognition heuristic and ask whether people tend to suspend the use of the heuristic in those cases.

Conclusive criterion knowledge

As pointed out earlier, the recognition heuristic has been proposed as a mental tool for uncertain inferences (i.e., when no local mental model can be constructed). A study by Pachur and Hertwig ( 2006 ; Study 2) suggests that, indeed, people do not use recognition information when they can construct a local mental model. When asked to judge which of two infectious diseases occurs more frequently, participants systematically chose the unrecognized disease when they knew that the recognized disease was practically eradicated – in other words, when they had conclusive criterion knowledge, which allowed them to locate the recognized object at the extreme low end of the criterion dimension. To illustrate, most participants recognized leprosy and knew that leprosy is nearly eradicated. This conclusive criterion knowledge allowed them to use a local mental model to deduce that leprosy is unlikely to be the more frequent of a pair of diseases. Accordingly, when leprosy was compared with an unrecognized disease, participants judged that the unrecognized disease was more frequent in 85% of the cases.

People's ability to construct a local mental model based on conclusive criterion knowledge is also likely an explanation for the results in Oppenheimer ( 2003 ; Experiment 1). He presented Stanford students with decision tasks comparing the population sizes of nearby cities that were highly recognized but rather small (e.g., Sausalito) with fictitious cities (a diverse set of fictitious names: Al Ahbahib, Gohaiza, Heingjing, Las Besas, Papayito, Rhavadran, Rio Del Sol, Schretzburg, Svatlanov, and Weingshe). In deciding which city was larger, participants chose the recognized city in only 37% of the cases. Participants presumably knew that the nearby cities were very small (Sausalito has around 7,000 inhabitants) and inferred that the unrecognized foreign cities may be larger.

Importantly, note that the mere availability of criterion knowledge is insufficient to construct a local mental model. Rather, the criterion knowledge must be conclusive – that is, enable the decision maker to deduce a solution. For instance, knowing that Saarbrücken has a population of 190,000 (absolute criterion knowledge) or that it is the 43rd largest city in Germany (relative criterion knowledge) does not allow one to construct a local mental model and derive that Saarbrücken must be larger (or smaller) than an unrecognized city (for which no criterion knowledge can be retrieved). As described above, a local mental model can only be derived if one believes that Saarbrücken is the largest (or smallest) city in Germany and thus – by definition – larger (or smaller) than any other city; or if the subjective intervals of possible criterion values for the two objects do not overlap. Only then criterion knowledge is conclusive, obviating the need for a probabilistic mental tool such as the recognition heuristic. Accordingly, Hilbig et al. ( 2009 ) found that the mere availability of absolute and relative criterion knowledge has no or only a weak influence on the use of the recognition heuristic.

Unknown reference class

Mixing real objects with fictitious ones in an experiment or using objects from an amalgam of reference classes makes it impossible to calculate the recognition validity and thus difficult to predict whether people will use the recognition heuristic or not. For instance, Pohl ( 2006 ; Experiment 2) used a mixed set consisting of the 20 largest Swiss cities and 20 well-known but small ski resort towns. Whereas recognition is usually highly correlated with city size, the recognition of ski resorts is mainly driven by factors other than the size of the city (e.g., skiing conditions), so recognition will be useful for the set of large cities, but not for the ski resorts. Consequently, decisions in this mixed set followed recognition considerably less frequently than in Pohl's Experiment 1 using a consistent set of large cities (75 vs. 89%).

Similarly, people may adopt strategies based on whether they believe that they are dealing with a representative or a biased sample of items. For instance, in addition to Oppenheimer’s ( 2003 ; see also McCloy et al., 2010 ; Experiment 1) tests of fictional cities being compared to recognized small towns near Palo Alto, other tests compared the fictional cities to places known for specific reasons, such as Nantucket (in a limerick), Chernobyl (nuclear disaster), or Timbuktu (in an expression). Since a reference class was not provided, and because it is hard to think of a natural reference class from which places like these would constitute a representative sample, participants may correctly infer that recognition is not valid in this artificial environment. In a clearly manipulated environment, such as that of trick questions, recognition validity may be undefined. Unable to assess the ecological validity of the recognition heuristic, it is only sensible for people to elect alternative strategies.

Low recognition validity

The key condition for the adaptive use of the recognition heuristic is its ecological rationality, when recognition accurately predicts the criterion in a given environment. Figure ​ Figure2 2 shows a summary of 11 tests of the recognition heuristic in different domains. As can be seen, people seem to follow recognition considerably less in domains where the recognition validity is very low (Pachur and Hertwig, 2006 ; Pohl, 2006 ). In fact, the average proportion of choices in line with recognition was highly correlated with the average recognition validity in the respective domain, r = 0.64 ( p = 0.03) – indicating an adaptive use of the recognition heuristic. These results suggest that the overall recognition validity in a particular domain is an important factor for whether the heuristic is applied or not 2 .

An external file that holds a picture, illustration, etc.
Object name is fpsyg-02-00147-g002.jpg

Association between recognition validity in 11 different environments and the observed proportion of inferences following the recognition heuristic .

However, both Pohl ( 2006 ; Experiments 1 and 4, but see Experiment 2) and Pachur and Hertwig ( 2006 ) found that, looking across participants in the same domain, participants did not seem to match their reliance on recognition directly to their individual recognition validity for that domain. Specifically, the individual proportions of choices in line with the heuristic were not correlated with the individual α. This interesting result along with the correlation depicted in Figure ​ Figure2 2 suggests that people know about validity differences between environments, but not necessarily about the exact validity of their own recognition knowledge in particular environments. Supporting this conclusion, Pachur et al. ( 2008 ) found that although the mean of participants’ estimates of the validity of their own recognition knowledge (to predict the size of British cities) matched the mean of their actual recognition validities perfectly (0.71 for both), the individual estimates and recognition validities were uncorrelated ( r = −0.03).

Another factor that seems to influence the use of the recognition heuristic is the way the inference problem is posed. In a set of studies, McCloy et al. ( 2010 ) and Hilbig et al. ( 2010b ) compared the use of recognition in different framings of a judgment task (i.e., “Which object is larger?” vs. “Which object is smaller?”). As it turned out, participants chose the option predicted by the recognition heuristic less often when the task was to pick the object with the lower criterion value than when the task was to pick the one with the higher criterion value. Moreover, Hilbig et al. ( 2010b ) found that participants required more time with the former than with the latter framing. There is some evidence that these asymmetries are mediated by differences in the perceived predictive value of recognition. In McCloy et al.’s ( 2010 ) study, participants rated it as more likely that they would recognize an object because it has a high criterion value than that they would not recognize an object because it has a small criterion value.

Discrediting source knowledge

According to Goldstein and Gigerenzer ( 2002 ), the rationale for the recognition heuristic's ecological rationality is the natural mediation process through which a distal criterion variable (e.g., the size of a city) increases the likelihood that the object is talked about, which, in turn, increases the likelihood that the object is recognized. Under these conditions, one can “exploit the structure of the information in natural environments” (Goldstein and Gigerenzer, 2002 , p. 76). When recognition is due to an experimental manipulation, that is, when people recognize an object from the experiment they are in, this natural mediation process is disrupted and people might use recognition differently, or not use it at all. The fact that source knowledge affects inferences has already been shown for other assessments of memory: when people believe that their memory has been manipulated experimentally or know that it is affected by factors that are completely unrelated to the criterion dimension, they rely considerably less on fluency (Jacoby et al., 1989 ; Hertwig et al., 2008 ) or ease of retrieval (e.g., Schwarz et al., 1991 ; Oppenheimer, 2004 ).

There is some indication that this is also the case for recognition. Specifically, decisions in accordance with recognition are considerably reduced when participants can attribute their sense of recognition to the experimental procedure (Newell and Shanks, 2004 ; Bröder and Eichler, 2006 ; see discussion by Pachur et al., 2008 ). Furthermore, knowledge that an object is recognized for a reason that has nothing to do with the object's criterion value may reduce the reliance on recognition. As mentioned above, Oppenheimer ( 2003 ; Experiment 2) found that only around 30% of participants chose the recognized object when comparing cities such as Chernobyl and Timbuktu to unrecognized fictional ones. In addition to the fact that the reference class is undefined for fictional objects (see above), this finding might also be due to people's knowledge that the recognized city was known because of a nuclear disaster or a popular limerick rather than due to its size. In sum, people may consider available knowledge about the source of their recognition that indicates its validity for a given decision when they decide whether to follow the recognition heuristic or not.

Assessing the validity of recognition based on whether one has specific source knowledge might itself be done heuristically (cf. Johnson et al., 1993 ). Specifically, one might infer simply from one's ability to retrieve specific knowledge about the source of an object's recognition – for instance, that a city is recognized from a friend's description of a trip – that recognition is an unreliable cue in this case. Why? One indication that recognition is a potentially valid predictor is when an object is recognized after encountering it multiple times in many different contexts (e.g., hearing a name in several conversations with different people, or across various media), rather than through one particular, possibly biased source. Thus, easily thinking of one particular source could indicate unreliability, while difficulty in retrieving detailed information concerning a particular context in which an object was encountered could indicate that recognition has been produced by multiple sources and is therefore an ecologically valid cue. Relatedly, if an object has appeared in many different contexts, retrieving information about any specific context is associated with longer response times than when an object has appeared in only one particular context (known as the “fan effect” – Anderson, 1974 ). In other words, the fluency of retrieving a specific source might indicate whether recognition is based on a (single) biased source or not.

Given the evidence that people systematically employ the recognition heuristic in some classes of environments and not others, its use seems to involve (at least) two distinct processes. One is judging whether an object is recognized or not; the other is assessing whether recognition is a useful indicator in the given judgment task. A brain imaging study by Volz et al. ( 2006 ) obtained evidence for the neural basis of these two processes. When a decision could be made based on recognition, there was activation in the medial parietal cortex, attributed to contributions of recognition memory. In addition, there were independent changes in activation in the anterior frontomedial cortex (aFMC), a brain area involved in evaluating internal states, including self-referential processes and social-cognitive judgments (e.g., relating an aspect of the external world to oneself). The processes underlying this latter activation may be associated with evaluating whether recognition is a useful cue in the current judgment situation. Importantly, there is evidence that this evaluation occurs after the recognition judgment, or takes more cognitive resources. As Pachur and Hertwig ( 2006 ) showed, fast inferences are more likely to follow recognition than slow inferences (in an environment with low recognition validity). Similarly, compared to young adults older adults have a reduced ability to discriminate between cases where recognition is useful and where not, and this age difference is mediated by old adults’ reduced cognitive capacity (Pachur et al., 2009 ).

Does recognition give rise to non-compensatory processing?

We now review studies that have tested the most controversial prediction of the recognition heuristic – that recognition is used in a non-compensatory manner (i.e., that all other uncertain cues are ignored) 3 . Importantly, the mere observation that people often pick a recognized object is not very diagnostic in this regard, as additional cues are often correlated with recognition and a consideration of this knowledge could thus lead to the same prediction as a non-compensatory mechanism based on recognition (e.g., Gigerenzer and Goldstein, 1996 ; Hilbig and Pohl, 2008 ). Moreover, people may deviate from choosing the recognized object in every case due to errors in applying the recognition heuristic. Several approaches have therefore been developed to obviate this problem: developing measures of additional knowledge use, manipulating additional cue knowledge experimentally, and making model comparisons (for a discussion, see Pachur, in press ). We will discuss these three approaches separately and summarize the main findings of each.

Measures of additional knowledge use

Several authors have developed measures that reflect whether knowledge beyond recognition is used. Based on the signal detection theory framework, Pachur and Hertwig ( 2006 ; see also Pachur et al., 2009 ) proposed to use a d ′ index which expresses the degree to which people tend to follow recognition more when it leads to a correct inference than when it leads to an incorrect inference (for a very similar approach, see Hilbig and Pohl, 2008 ). Because according to the recognition heuristic no further cue knowledge is considered, d ′ should be zero. Usually, however, d ′ is found to be clearly larger than zero (for instance, Pachur et al., 2009 , report values between 0.34 and 1.1), indicating that at least some people consider additional knowledge (although this might partly be conclusive criterion knowledge; Pachur and Hertwig, 2006 ; Pachur et al., 2009 ). More recently, Hilbig et al. ( 2010a ) developed a formal measurement model (using a multinomial tree) that allows measuring the probability with which people ignore knowledge beyond recognition. Applying this model to several data sets, they found that people mostly use recognition in a non-compensatory way (between 62 and 76% of the time), but sometimes recruit strategies other than the recognition heuristic – strategies that do not strictly ignore additional knowledge.

Experimental manipulation of additional cue knowledge

The paradigm most frequently used for testing the non-compensatory use of recognition is to train participants on additional cue knowledge for some objects that he or she already recognizes prior to the experiment (e.g., typically cues that indicate that those objects have a small criterion value). If a person relies on the recognition heuristic and hence uses recognition in a non-compensatory way, this new knowledge beyond recognition will not affect the degree to which inferences follow recognition. That is, the recognized object should be chosen irrespective of whether the additional cue knowledge indicates that the object has a high or a low criterion value. But do people's choices conform to this prediction?

An experiment by Goldstein and Gigerenzer ( 2002 ) suggests that they do. The authors informed their US participants that in about 78% of cases, German cities that have a soccer team in the Premier League are larger than cities that do not. In addition, participants learned whether certain recognized cities had a soccer team. When later asked to pick the larger of two German cities, participants chose a recognized city over an unrecognized city in 92% of all cases even when they had learned that the recognized city had no soccer team and the additional cue information thus contradicted recognition.

Richter and Späth ( 2006 ), Newell and Fernandez ( 2006 ; Experiment 1), and Pachur et al. ( 2008 ) conducted experiments that are direct extensions of Goldstein and Gigerenzer’s ( 2002 ) original study. Participants learned new information about objects that contradicted recognition (e.g., the additional learned cue indicated that the recognized city was small). Richter and Späth ( 2006 ; Experiment 3) asked their participants to judge the relative size of American cities in 190 pairs, replacing the soccer cue used in Goldstein and Gigerenzer's study with whether the city has an international airport. Without the contradictory airport cue, 17 of 28 participants followed the recognition heuristic with zero or one exception in the 32 relevant decisions, and 95% (median 97%) of the judgments across all participants followed the recognition heuristic – see Figure ​ Figure3. 3 . When the airport cue contradicted recognition, still 17 of 28 participants made the inferences predicted by the recognition heuristic: 9 exclusively and 8 all but once (i.e., 31 out of 32 times). The median percentage of judgments in line with the recognition heuristic remained unchanged at 97%. The mean dropped to 82%, but as Figure ​ Figure3 3 shows, this does not mean that all individuals decreased in recognition heuristic adherence. Group means mask individual strategy selection (for similar results, see Figure 5 in Pachur et al., 2008 ). If we define a change as increasing or decreasing adherence by more than 1 in 32 questions, then even when facing contradictory information 43% of participants did not change, 39% conformed to the recognition heuristic less often, and 18% conformed more often. While individual differences can be clearly seen, only 4 of 28 participants did not follow the recognition heuristic in the majority of judgments, and no participant adopted an anti-recognition strategy.

An external file that holds a picture, illustration, etc.
Object name is fpsyg-02-00147-g003.jpg

Reanalysis of Richter and Späth’s ( 2006 ) Experiment 3 based on individual data on use of recognition heuristic . The task was to infer which of two US cities has the larger population. (A) Shows the percentage of times each participant used the recognition heuristic when no contradicting cues were provided for the recognized city (with participants ordered left to right by amount of use). (B) Shows the same when participants learned that the recognized city does not have an international airport. Even when participants learned a valid cue that contradicted the recognition heuristic, a majority (17 of 28) made inferences consistent with the recognition heuristic with zero or one exceptions out of 32 decisions. (We are grateful to Richter and Späth ( 2006 ) for providing their individual data.)

Newell and Fernandez ( 2006 ) manipulated knowledge of the probability that an unrecognized city had a soccer team (which would indicate that the city is large) and subsequently asked participants to judge the relative size of these and other, unrecognized cities. If recognition were used in a non-compensatory manner, participants’ additional knowledge about whether a city has a soccer team should not affect their judgments. On the aggregate level, however, it did. The mean percentage of judgments where participants picked the recognized city was, overall, smaller when participants had learned the additional soccer team cue for that city that contradicted recognition (than when the cue supported recognition: 64 vs. 98%), and also smaller when the probability that an unrecognized city had a soccer team was high (than when the probability was low: 77 vs. 86%). However, as in Richter and Späth’s ( 2006 ) Experiment 3, the group means mask individual differences: overall, 23% of participants always chose the recognized city, irrespective of contradicting cue information (see Pachur et al., 2008 ).

In the studies of Richter and Späth ( 2006 ; Experiment 2) and Pachur et al. ( 2008 ), recognition was contradicted by not just one, but by up to three cues 4 . Would choices still follow recognition in this situation, as predicted by the recognition heuristic? Pachur et al. ( 2008 ), whose participants were taught up to three additional cues about British cities and subsequently asked to judge the cities’ relative sizes, observed higher proportions of participants ignoring further cue knowledge than using it: between 48 and 60% of their participants picked the recognized city with zero exceptions. That is, a large proportion of participants followed the choice indicated by recognition even when it was contradicted by three additional cues (for similar results, see Richter and Späth, 2006 , as reanalyzed in Gigerenzer and Brighton, 2009 ). Importantly, this occurred although most participants perceived several of the other cues as having a higher validity than recognition.

In summary, while some analyses on the aggregate level appear to suggest that recognition is processed in a compensatory fashion – inconsistent with the recognition heuristic – analyzing individual decision makers show, however, that the recognition heuristic often captures the decisions of a large proportion of people. Thus, aggregate analyses can hide important individual differences. Still, the results also indicate that some people do rely on different strategies, either compensatory or non-compensatory ones. How well does the recognition heuristic compare to alternative strategies in describing people's inferences?

Model comparisons

Whereas tests of the recognition heuristic have mainly focused on qualitative predictions of the heuristic without pitting it against other models (but see Pachur and Biele, 2007 ), Marewski et al. ( 2010 ) compared the recognition heuristic with several alternative models – both compensatory and non-compensatory ones. Such competitive tests are important because, as a model always has to make simplifying assumptions, its predictions will necessarily deviate from reality. Therefore, showing that data deviate from a model's prediction does not necessarily make the model irrelevant (Pachur, in press). Instead, the model remains useful as long as no other model is better able to account for the data. As it turned out from Marewski et al. ( 2010 ) comparison, none of the alternative strategies, covering a large range of different processes, was able to outperform the recognition heuristic. In other words, although people sometimes cannot help but notice cues beyond recognition, they do not seem to do this as systematically as predicted by compensatory strategies.

An important insight emerging from empirical tests of the recognition heuristic is that we need to better understand individual differences in strategy use. Pachur et al. ( 2009 ) studied individual differences in how recognition is used by comparing decision making by younger and older adults, whose cognitive systems usually differ in ways potentially relevant for the use of recognition. As mentioned above, due to their reduced cognitive resources old adults have a constrained ability to judge whether recognition is useful for a given task or not – and thus are limited in their adaptive use of recognition. Such age-comparative studies on fast-and-frugal heuristics have begun to provide intriguing results concerning the adaptive use of these heuristics and their role in older adults’ decision making (e.g., Mata et al., 2007 ).

Finally, what is the state of evidence for the less-is-more effect predicted (under specific conditions) by the recognition heuristic? Several demonstrations have shown that having heard of a larger number of objects is sometimes associated with lower inferential accuracy, both for individual decision makers (e.g., Goldstein and Gigerenzer, 2002 ) and groups (Reimer and Katsikopoulos, 2004 ). However, reviewing 10 data sets where the recognition validity was larger than the knowledge validity – one of the conditions for the effect highlighted by Goldstein and Gigerenzer ( 2002 ) – Pachur ( 2010 ) concluded that the evidence for the effect is mixed. Some have argued that the absence of the less-is-more effect provides evidence against people's use of the recognition heuristic (e.g., Hilbig et al., 2010a ). However, Pachur ( 2010 ) also found that in many data sets, recognition and knowledge validities are correlated with the number of recognized objects, violating the condition in Goldstein and Gigerenzer's analysis that α and β are constant across the number of recognized objects. Via computer simulations Pachur showed how the correlation of the recognition validity and the knowledge validity on the number of recognized objects affects the predicted existence and size of the less-is-more effect. In other words, although the recognition heuristic can lead to a less-is-more effect under certain conditions, some of these conditions seem to be rather uncommon in the real world. As a consequence, clear manifestations of the less-is-more effect may be difficult to find, even if people often use the recognition heuristic.

The recognition heuristic offers a parsimonious model of how recognition is exploited in inferences from memory. Experimental tests have shown that this simple model captures several key empirical findings, such as the often dominating impact of recognition on decisions, the contingent nature of the reliance on recognition, and the counterintuitive result that limited knowledge can outwit extensive knowledge. Nevertheless, the model cannot explain every single judgment. Studies that tested the extreme hypothesis that people would rely on the recognition heuristic in 100 or 99.2% of the cases they face (e.g., Hilbig et al., 2010a ) have concluded that many people violate such a pattern and appear to recruit cue knowledge beyond recognition. Such deviations from recognition heuristic use may suggest an adaptive deployment of the heuristic in some occasions that is sensitive to factors beyond the recognition validity in an environment, and a switch to other strategies that have not yet been identified. Currently we know little of the conditions under which this switch occurs and how systematically it happens; it is therefore unclear how these empirical violations of the recognition heuristic should be modeled. Also important, we need to better understand the evaluation processes preceding the heuristic's use and the reasons for the individual differences in the use of the heuristic (Pachur and Hertwig, 2006 ; Pachur, in press ).

Connecting the Recognition Heuristic to Memory Models

In their model of the recognition heuristic Goldstein and Gigerenzer ( 2002 ) focused on describing how information is searched for and when this search is stopped. They did not provide a model for the process underlying the recognition judgment itself. Some authors have criticized this omission (e.g., Dougherty et al., 2008 ). And indeed, subsequent developments have, by combining models of recognition-based inference with models of memory, given rise to new predictions of recognition-based inference that result from the processes of recognition memory. For instance, implementing the recognition heuristic within the ACT-R architecture (Anderson et al., 2004 ), Schooler and Hertwig ( 2005 ) have shown that the less-is-more effect can arise through forgetting. Specifically, if an object's criterion values is linked to mention frequency in the environment, which in turn influences activation in memory, small objects are more likely to decay in activation and thus become unrecognized than large objects. As a result of this systematic pattern in forgetting, memory decay can increase the recognition validity. In further ACT-R analyses, Marewski and Schooler ( in press ) show that the recognition heuristic is most likely to produce accurate inferences when knowledge is available about the recognized object.

A second example is Dougherty et al.’s ( 2008 ) work on the recognition heuristic using Hintzman's MINERVA model (see also Pachur, 2010 ). In computer simulations, the authors identified alternative accounts for the less-is-more effect and demonstrated that the effect can also occur if inferences are based on a mechanism using continuous familiarity. Third, Erdfelder et al. ( 2011 ) connected the recognition heuristic with a two-high-threshold memory model, that also accounts for the process underlying the recognition judgment. The authors showed that this combined model can explain response time patterns that Hilbig and Pohl ( 2009 ) interpreted as evidence for a compensatory use of recognition. Finally, using a signal detection theory framework several authors have analyzed the implications of specific performance patterns in recognition memory (i.e., hit and false alarm rates) for the accuracy of the recognition heuristic (Pleskac, 2007 ; Katsikopoulos, 2010 ). For instance, extending Goldstein and Gigerenzer’s ( 2002 ) original analyses of the less-is-more effect for the case of imperfect memory, Katsikopoulos ( 2010 ) showed that for the less-is-more effect to occur the recognition validity does not need to be larger than the knowledge validity.

Related Judgment Phenomena Based on Memory of Previous Encounters

In this final section, we connect the recognition heuristic to several classical phenomena (such as the reiteration effect and the mere exposure effect) in judgment research that also describe how memory of past exposure to objects can be exploited to make inferences about unknown aspects of the environment. What is different about the recognition heuristic, however, is its precise account of the process involved in making an inference. The purpose of this connection is to highlight how research on the recognition heuristic could inspire novel questions in the study of these phenomena and to demonstrate that the special status of recognition may pervade decision making more generally.

Inferences about the truth of a statement

A common inference problem in the real world is to judge whether a statement encountered is correct or false. What is the role of recognition, or more generally memory traces created by previous encounters with a statement, when making such inferences? Hasher et al. ( 1977 ) presented participants, over a period of 3 weeks, with statements that were either true or false (e.g., “the People's Republic of China was founded in 1947”). Most of the statements appeared only once, but some were presented repeatedly across the three sessions. Hasher et al. ( 1977 ) found that when participants subsequently indicated their confidence that a statement was true, they expressed an increasing confidence in the veracity of a statement the more frequently it was repeated. This reiteration effect (or frequency–validity effect) can be taken to indicate that participants used the strength of the memory traces of the statements as an indication of how likely the statement was to be true.

The reiteration effect is closely related to findings by Gilbert and colleagues, who presented their participants with a series of statements followed by information as to whether each statement was true or false (Gilbert et al., 1990 , 1993 ; Gilbert, 1991 ; but see Hasson et al., 2005 ). When participants had an uncertain basis for assessing the statement's veracity (e.g., because they were distracted during processing; Gilbert et al., 1990 ), participants showed a stronger tendency to misclassify a previously seen false statement as true than to misclassify a true statement as false. In contrast, previously unseen statements were often classified as false. So even single previous encounters may be used by people to infer something about a statement, namely, that it is true. Although this default to believe a previously seen statement can be overturned, making such a switch appears to require additional cognitive resources: when under time pressure or cognitive load, participants tended to treat as true even statements they were previously informed were false (Gilbert et al., 1993 ). This parallels the finding for the recognition heuristic that under time pressure people tend to ascribe recognized objects a higher criterion value than unrecognized objects even when recognition is a poor cue (Pachur and Hertwig, 2006 ). Interestingly, Gilbert et al. ( 1990 ) also mentioned that the initial belief in the truth of statements that one encounters “may be economical and…adaptive” (p. 612), thus offering a potential link to the concept of ecological rationality. Finally, in parallel with McCloy et al.’s ( 2010 ) demonstration of framing effects in recognition-based inference (see above), Gilbert ( 1991 ) argued that “acceptance is psychologically prior” to rejection of the truth of a statement (p. 116).

The decisions considered so far involved categorical judgments about the environment, such as, which is larger: A or B? Is the statement X true or false? But often we have to make an absolute estimate regarding some aspect of an object and come up with a numerical value (e.g., the number of inhabitants of a city). Is information about whether one has heard of an object also used for estimation? Results by Brown ( 2002 ), who studied people's estimations of dates of events and country sizes, suggest that they do 5 . Specifically, participants estimated unrecognized events as occurring a rather long time ago and unfamiliar countries as having rather small populations. People thus seem to take their ignorance as useful information for where to locate an object on a quantitative dimension even in absolute estimation. Compared to the recognition heuristic, the processes involved in estimation are probably more complex, using metric and distribution knowledge to convert ignorance into a quantitative estimate. Lee and Brown ( 2004 ) proposed a model describing how people make date estimates of unknown events by combining the fact that they are not recognized with other information provided by the task.

Preference and ascription of positive meaning

So far we have looked at recognition-based inferences about objective characteristics of the environment. What about the effects of previous encounters on preferences, for which there is no objective criterion? As shown in the mere exposure effect (Zajonc, 1968 ), repeatedly encountering an object results in an increased liking or preference for the object. In addition, objects such as symbols are generally ascribed a more positive meaning the more often they have been encountered. This indicates that memory traces of previous encounters are also used for constructing one's affective responses to the environment. However, it is important to stress that in contrast to the recognition heuristic, these effects do not require that the object is recognized as having been seen before. Hence, the recognition heuristic cannot account for the mere exposure effect. The fluency heuristic (Schooler and Hertwig, 2005 ) is one possible mechanism by which (consciously) unrecognized objects may gain preference through repeated exposure (and the same process may also apply to inferences between unrecognized objects).

In a direct test of the recognition heuristic in preference judgments, Oeusoonthornwattana and Shanks ( 2010 ) first taught participants information about various brands (e.g., “Ecover has been engaging in animal testing.”) and then asked them whether they would rather pick a product (e.g., shower gel) of a recognized brand – for which they had learned additional knowledge – or a product of an unrecognized brand. The authors concluded that recognition had a large influence on the preferences, but a large majority of participants also considered the taught knowledge. In fact, however, it is rather unclear what these findings mean for the recognition heuristic. As in most studies on the recognition heuristic, Oeusoonthornwattana and Shanks ( 2010 ) tested no alterative model. So although participants sometimes did not rely on the recognition heuristic, it may still provide the best available model for the observed data. In addition, the taught knowledge about the brands concerned information about the ethical standards of the brands, which might receive special attention compared to more common product information. Due to these methodological problems, we still know relatively little about the role of the recognition heuristic in consumer preferences. In fact, several authors have emphasized the apparently dominant impact of brand recognition in consumer choice (e.g., Allison and Uhl, 1964 ; Hoyer and Brown, 1990 ).

The recognition heuristic was proposed as a model of how recognition, reflecting a particular statistical structure related to experiencing objects in the environment, can be exploited by a smart and simple mechanism to make inferences about those objects. By virtue of its precise formulation allowing for clear-cut predictions, the recognition heuristic has been the focus of a large number of studies in a relatively short time. The studies indicate that a majority of people consistently rely on the recognition heuristic when it is ecologically rational, signaling its adaptive use. It thus offers perhaps the simplest realization of Herbert Simon's notion that boundedly rational decision making can arise from simple mental tools that are matched to the structure of the environment.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1 Fluency could thus function as a proxy for frequency information, but there is also evidence that people use both types of information independently (e.g., Schwarz and Vaughn, 2002 ).

2 Some results, however, suggest that people only decide not to follow recognition in domains with low recognition validity when they have alternative knowledge available that has a higher validity than recognition (Pachur and Biele, 2007 ; Hertwig et al., 2008 ).

3 We focus on studies that have examined inferences from memory, the context for which fast-and-frugal heuristics were originally proposed. Other experiments in which recognition “knowledge” was given to people along with other cues on a computer screen in an inferences-from-givens setup are not appropriate tests of this predicted non-compensatory use of recognition (e.g., Newell and Shanks’s, 2004 , study, in which participants were told that they recognized an imaginary company).

4 The experiment by Bröder and Eichler ( 2006 ) followed a similar methodology but involved experimentally induced rather than natural recognition and so is not discussed here.

5 A similar observation was made by Pachur and Hertwig ( 2006 ): in an estimation task, people assigned unrecognized diseases to intermediate, rather than extremely low, frequency categories.

  • Allison R. I., Uhl K. P. (1964). Influence of beer brand identification on taste perception . J. Mark. Res. 1 , 36–39 10.2307/3150054 [ CrossRef ] [ Google Scholar ]
  • Anderson J. R. (1974). Retrieval of propositional information from long-term memory . Cogn. Psychol. 5 , 451–474 10.1016/0010-0285(74)90021-8 [ CrossRef ] [ Google Scholar ]
  • Anderson J. R., Bothell D., Byrne M. D., Douglass S., Lebiere C., Qin Y. (2004). An integrated theory of the mind . Psychol. Rev. 111 , 1036–1060 10.1037/0033-295X.111.4.1036 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ayton P., önkal D. (2004). Effects of Ignorance and Information on Judgmental Forecasting. London: City University (unpublished manuscript) [ Google Scholar ]
  • Borges B., Goldstein D. G., Ortmann A., Gigerenzer G. (1999). “Can ignorance beat the stock market?” in Simple Heuristics That Make us Smart , eds Gigerenzer G., Todd P. M. The ABC Research Group (New York: Oxford University Press; ), 59–72 [ Google Scholar ]
  • Boyd M. (2001). On ignorance, intuition, and investing: a bear market test of the recognition heuristic . J. Psychol. Financ. Markets 2 , 150–156 [ Google Scholar ]
  • Bröder A., Eichler A. (2006). The use of recognition information and additional cues in inferences from memory . Acta Psychol. (Amst.) 121 , 275–284 10.1016/j.actpsy.2005.07.001 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brown N. R. (2002). Real-world estimation: estimation modes and seeding effects . Psychol. Learn. Motiv. 41 , 321–359 [ Google Scholar ]
  • Brunswik E. (1943). Organismic achievement and environmental probability . Psychol. Rev. 50 , 255–272 10.1037/h0060889 [ CrossRef ] [ Google Scholar ]
  • Bullock S., Todd P. M. (1999). Made to measure: ecological rationality in structured environments . Minds Mach. 9 , 497–541 10.1023/A:1008352717581 [ CrossRef ] [ Google Scholar ]
  • Davis-Stober C. P., Dana J., Budescu D. V. (2010). Why recognition is rational: optimality results on single-variable decision rules . Judgm. Decis. Mak. 5 , 216–229 [ Google Scholar ]
  • Dougherty M. R., Franco-Watkins A. M., Thomas R. (2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics . Psychol. Rev. 115 , 199–213 10.1037/0033-295X.115.1.211 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Erdfelder E., Küpper-Tetzel C. E., Mattern S. D. (2011). Threshold models of recognition and the recognition heuristic . Judgm. Decis. Mak. 6 , 7–22 [ Google Scholar ]
  • Ewald P. W. (1994). Evolution of Infectious Diseases. Oxford: Oxford University Press [ Google Scholar ]
  • Ford J. K., Schmitt N., Schechtman S. L., Hults B. H., Doherty M. L. (1989). Process tracing methods: contributions, problems, and neglected research questions . Organ. Behav. Decis. Process. 43 , 75–117 10.1016/0749-5978(89)90059-9 [ CrossRef ] [ Google Scholar ]
  • Frosch C., Beaman C. P., McCloy R. (2007). A little learning is a dangerous thing: an experimental demonstration of ignorance-driven inference . Q. J. Exp. Psychol. 60 , 1329–1336 10.1080/17470210701507949 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Galef B. G., Jr., McQuoid L. M., Whiskin E. E. (1990). Further evidence that Norway rats do not socially transmit learned aversions to toxic baits . Anim. Learn. Behav. 18 , 199–205 10.3758/BF03205316 [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G., Brighton H. (2009). Homo heuristicus: why biased minds make better inferences . Top. Cogn. Sci. 1 , 107–143 10.1111/j.1756-8765.2008.01006.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G., Goldstein D. G. (1996). Reasoning the fast and frugal way: models of bounded rationality . Psychol. Rev. 103 , 650–669 10.1037/0033-295X.103.3.592 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G., Goldstein D. G. (2011). The recognition heuristic: a decade of research . Judgm. Decis. Mak. 6 , 100–121 [ Google Scholar ]
  • Gigerenzer G., Hoffrage U., Kleinbölting H. (1991). Probabilistic mental models: a Brunswikian theory of confidence . Psychol. Rev. 98 , 506–528 10.1037/0033-295X.98.4.506 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G., Todd P. M., The ABC Research Group (1999). Simple Heuristics That Make us Smart. New York: Oxford University Press [ Google Scholar ]
  • Gilbert D. T. (1991). How mental systems believe . Am. Psychol. 46 , 107–119 10.1037/0003-066X.46.2.107 [ CrossRef ] [ Google Scholar ]
  • Gilbert D. T., Krull D. S., Malone P. S. (1990). Unbelieving the unbelievable: some problems in the rejection of false information . J. Pers. Soc. Psychol. 59 , 601–613 10.1037/0022-3514.59.4.601 [ CrossRef ] [ Google Scholar ]
  • Gilbert D. T., Tafarodi R. W., Malone P. S. (1993). You can't not believe everything you read . J. Pers. Soc. Psychol. 65 , 221–233 10.1037/0022-3514.65.2.221 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Goldstein D. G., Gigerenzer G. (1999). “The recognition heuristic: how ignorance makes us smart,” in Simple Heuristics That Make us Smart , eds Gigerenzer G., Todd P. M., The ABC Research Group (New York: Oxford University Press; ), 37–58 [ Google Scholar ]
  • Goldstein D. G., Gigerenzer G. (2002). Models of ecological rationality: the recognition heuristic . Psychol. Rev. 109 , 75–90 10.1037/0033-295X.109.1.75 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Goldstein W. M., Weber E. U. (1997). “Content and discontent: indications and implications of domain specificity in preferential decision making,” in Research on Judgment and Decision Making: Currents, Connections and Controversies , eds Goldstein W. M., Hogarth R. M. (Cambridge: Cambridge University Press; ), 566–617 [ Google Scholar ]
  • Hasher L., Goldstein D., Toppino T. (1977). Frequency and the conference of referential validity . J. Verbal Learn. Verbal Behav. 16 , 107–112 10.1016/S0022-5371(77)80012-1 [ CrossRef ] [ Google Scholar ]
  • Hasson U., Simmons J. P., Todorov A. (2005). Believe it or not: on the possibility of suspending belief . Psychol. Sci. 16 , 566–571 10.1111/j.0956-7976.2005.01576.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hertwig R., Herzog S. M., Schooler L. J., Reimer T. (2008). Fluency heuristic: a model of how the mind exploits a by-product of information retrieval . J. Exp. Psychol. Learn. Mem. Cogn. 34 , 1191–1206 10.1037/a0013025 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hertwig R., Pachur T., Kurzenhäuser S. (2005). Judgments of risk frequencies: tests of possible cognitive mechanisms . J. Exp. Psychol. Learn. Mem. Cogn. 31 , 621–642 10.1037/0278-7393.31.4.621 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hertwig R., Todd P. M. (2003). “More is not always better: the benefits of cognitive limits,” in Thinking: Psychological Perspectives on Reasoning, Judgment and Decision Making , eds Hardman D., Macchi L. (Chichester: Wiley; ), 213–231 [ Google Scholar ]
  • Hilbig B. E., Erdfelder E., Pohl R. F. (2010a). One-reason decision-making unveiled: a measurement model of the recognition heuristic . J. Exp. Psychol. Learn. Mem. Cogn. 36 , 123–134 10.1037/a0017518 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hilbig B. E., Scholl S. G., Pohl R. (2010b). Think or blink: is the recognition heuristic an “intuitive” strategy? Judgm. Decis. Mak. 5 , 272–284 [ Google Scholar ]
  • Hilbig B. E., Pohl R. F. (2008). Recognizing users of the recognition heuristic . Exp. Psychol. 55 , 394–401 [ PubMed ] [ Google Scholar ]
  • Hilbig B. E., Pohl R. F. (2009). Ignorance-versus evidence-based decision making: a decision time analysis of the recognition heuristic . J. Exp. Psychol. Learn. Mem. Cogn. 35 , 1296–1305 10.1037/a0016565 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hilbig B. E., Pohl R. F., Bröder A. (2009). Criterion knowledge: a moderator of using the recognition heuristic? J. Behav. Decis. Mak. 22 , 510–522 10.1002/bdm.644 [ CrossRef ] [ Google Scholar ]
  • Hintzman D. L., Curran T. (1994). Retrieval dynamics of recognition and frequency judgments: evidence for separate processes of familiarity and recall . J. Mem. Lang. 33 , 1–18 10.1006/jmla.1994.1001 [ CrossRef ] [ Google Scholar ]
  • Hoyer W. D., Brown S. P. (1990). Effects of brand awareness on choice for a common, repeat-purchase product . J. Consum. Res. 17 , 141–148 10.1086/208544 [ CrossRef ] [ Google Scholar ]
  • Jacoby L. L., Dallas M. (1981). On the relationship between autobiographical memory and perceptual learning . J. Exp. Psychol. Gen. 110 , 306–340 10.1037/0096-3445.110.3.306 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Jacoby L. L., Kelley C., Brown J., Jasechko J. (1989). Becoming famous overnight: limits on the ability to avoid unconscious influences of the past . J. Pers. Soc. Psychol. 56 , 326–338 10.1037/0022-3514.56.3.326 [ CrossRef ] [ Google Scholar ]
  • Johnson M. K., Hastroudi S., Lindsay D. S. (1993). Source monitoring . Psychol. Bull. 114 , 3–28 10.1037/0033-2909.114.1.3 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Katsikopoulos K. V. (2010). The less-is-more effect: predictions and tests . Judgm. Decis. Mak. 5 , 244–257 [ Google Scholar ]
  • Kelley C. M., Jacoby L. L. (1998). Subjective reports and process dissociation: fluency, knowing and feeling . Acta Psychol. (Amst.) 98 , 127–140 10.1016/S0001-6918(97)00039-5 [ CrossRef ] [ Google Scholar ]
  • Lee P. J., Brown N. R. (2004). The role of guessing and boundaries on date estimation biases . Psychon. Bull. Rev. 11 , 748–754 10.3758/BF03196630 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Marewski J. N., Gaissmaier W., Dieckmann A., Schooler L. J., Gigerenzer G. (2005). Ignorance-based reasoning? Applying the recognition heuristic to elections . Paper Presented at the 20th Biennial Conference on Subjective Probability, Utility and Decision Making, Stockholm [ Google Scholar ]
  • Marewski J. N., Gaissmaier W., Schooler L. J., Goldstein D. G., Gigerenzer G. (2010). From recognition to decisions: extending and testing recognition-based models for multi-alternative inference . Psychon. Bull. Rev. 17 , 287–309 10.3758/PBR.17.3.287 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Marewski J. N., Schooler L. J. (in press). Cognitive niches: an ecological model of strategy selection . Psychol. Rev. [ PubMed ] [ Google Scholar ]
  • Mata R., Schooler L. J., Rieskamp J. (2007). The aging decision maker: cognitive aging and the adaptive selection of decision strategies . Psychol. Aging 22 , 796–810 10.1037/0882-7974.22.4.796 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McCloy R., Beaman C. P., Frosch C. A., Goddard K. (2010). Fast and frugal framing effects? J. Exp. Psychol. Learn. Mem. Cogn. 36 , 1043–1052 10.1037/a0019693 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Newell B. R., Fernandez D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: two critical tests of the recognition heuristic . J. Behav. Decis. Mak. 19 , 333–346 10.1002/bdm.531 [ CrossRef ] [ Google Scholar ]
  • Newell B. R., Shanks D. R. (2004). On the role of recognition in decision making . J. Exp. Psychol. Learn. Mem. Cogn. 30 , 923–935 10.1037/0278-7393.30.4.923 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Noble J., Todd P. M., Tuci E. (2001). Explaining social learning of food preferences without aversions: an evolutionary simulation model of Norway rats . Proc. R. Soc. Lond. B Biol. Sci. 268 , 141–149 10.1098/rspb.2000.1342 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Oeusoonthornwattana O., Shanks D. R. (2010). I like what I know: is recognition a non-compensatory determiner of consumer choice? Judgm. Decis. Mak. 5 , 310–325 [ Google Scholar ]
  • Oppenheimer D. M. (2003). Not so fast! (and not so frugal!): rethinking the recognition heuristic . Cognition 90 , B1–B9 10.1016/S0010-0277(03)00141-0 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Oppenheimer D. M. (2004). Spontaneous discounting of availability in frequency judgment tasks . Psychol. Sci. 15 , 100–105 10.1111/j.0963-7214.2004.01502005.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Oppenheimer D. M. (2008). The secret life of fluency . Trends Cogn. Sci. (Regul. Ed.) 12 , 237–241 10.1016/j.tics.2008.02.014 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ortmann A., Gigerenzer G., Borges B., Goldstein D. G. (2008). “The recognition heuristic: a fast and frugal way to investment choice?” in Handbook of Experimental Economics Results: Vol. 1 (Handbooks in Economics No. 28) , eds Plott C. R., Smith V. L. (Amsterdam: North-Holland; ), 993–1003 [ Google Scholar ]
  • Pachur T. (2010). Recognition-based inference: when is less more in the real world? Psychon. Bull. Rev. 17 , 589–598 10.3758/PBR.17.5.630 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pachur T. (in press). The limited value of precise tests of the recognition heuristic . Judgm. Decis. Mak. [ Google Scholar ]
  • Pachur T., Biele G. (2007). Forecasting from ignorance: the use and usefulness of recognition in lay predictions of sports events . Acta Psychol. (Amst.) 125 , 99–116 10.1016/j.actpsy.2006.07.002 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pachur T., Bröder A., Marewski J. N. (2008). The recognition heuristic in memory-based inference: is recognition a non-compensatory cue? J. Behav. Decis. Mak. 21 , 183–210 10.1002/bdm.581 [ CrossRef ] [ Google Scholar ]
  • Pachur T., Hertwig R. (2006). On the psychology of the recognition heuristic: retrieval primacy as a key determinant of its use . J. Exp. Psychol. Learn. Mem. Cogn. 32 , 983–1002 10.1037/0278-7393.32.5.983 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pachur T., Mata R., Schooler L. J. (2009). Cognitive aging and the use of recognition in decision making . Psychol. Aging 24 , 901–915 10.1037/a0017211 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pleskac T. J. (2007). A signal detection analysis of the recognition heuristic . Psychon. Bull. Rev. 14 , 379–391 10.3758/BF03194081 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pohl R. (2006). Empirical tests or the recognition heuristic . J. Behav. Decis. Mak. 19 , 251–271 10.1002/bdm.522 [ CrossRef ] [ Google Scholar ]
  • Reimer T., Katsikopoulos K. V. (2004). The use of recognition in group decision-making . Cogn. Sci. 28 , 1009–1029 10.1207/s15516709cog2806_6 [ CrossRef ] [ Google Scholar ]
  • Richter T., Späth P. (2006). Recognition is used as one cue among others in judgment and decision making . J. Exp. Psychol. Learn. Mem. Cogn. 32 , 150–162 10.1037/0278-7393.32.1.150 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scheibehenne B., Bröder A. (2007). Predicting Wimbledon 2005 tennis results by mere player name recognition . Int. J. Forecast. 23 , 415–426 10.1016/j.ijforecast.2007.05.006 [ CrossRef ] [ Google Scholar ]
  • Schooler L., Hertwig R. (2005). How forgetting aids heuristic inference . Psychol. Rev. 112 , 610–628 10.1037/0033-295X.112.3.610 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schwarz N., Bless H., Strack F., Klumpp G., Rittenauer-Schatka H., Simons A. (1991). Ease of retrieval as information: another look at the availability heuristic . J. Pers. Soc. Psychol. 61 , 195–202 10.1037/0022-3514.61.2.195 [ CrossRef ] [ Google Scholar ]
  • Schwarz N., Vaughn L. A. (2002). “The availability heuristic revisited: ease of recall and content of recall as distinct sources of information,” in Heuristics and Biases: The Psychology of Intuitive Judgment , eds Gilovich T., Griffin D., Kahneman D. (New York: Cambridge University Press; ), 103–119 [ Google Scholar ]
  • Serwe S., Frings C. (2006). Who will win Wimbledon 2003? The recognition heuristic in predicting sports events . J. Behav. Decis. Mak. 19 , 321–332 10.1002/bdm.530 [ CrossRef ] [ Google Scholar ]
  • Simon H. A. (1990). Invariants of human behavior . Annu. Rev. Psychol. 41 , 1–19 10.1146/annurev.ps.41.020190.000245 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Smithson M. (2010). When less is more in the recognition heuristic . Judgm. Decis. Mak. 5 , 230–243 [ Google Scholar ]
  • Snook B., Cullen R. M. (2006). Recognizing national hockey league greatness with an ignorance-based heuristic . Can. J. Exp. Psychol. 60 , 33–43 [ PubMed ] [ Google Scholar ]
  • Todd P. M., Heuvelink A. (2007). “Shaping social environments with simple recognition heuristics,” The Innate Mind, Vol. 2, Culture and Cognition , eds Carruthers P., Laurence S., Stich S. (Oxford: Oxford University Press; ), 165–180 [ Google Scholar ]
  • Tversky A., Kahneman D. (1973). Availability: a heuristic for judging frequency and probability . Cogn. Psychol. 5 , 207–232 10.1016/0010-0285(73)90033-9 [ CrossRef ] [ Google Scholar ]
  • Volz K. G., Schooler L. J., Schubotz R. I., Raab M., Gigerenzer G., von Cramon D. Y. (2006). Why you think Milan is larger than Modena: neural correlates of the recognition heuristic . J. Cogn. Neurosci. 18 , 1924–1936 10.1162/jocn.2006.18.11.1924 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Weber E. U., Siebenmorgen N., Weber M. (2005). Communicating asset risk: how name recognition and the format of historic volatility information affect risk perception and investment decisions . Risk Anal. 25 , 597–609 10.1111/j.1539-6924.2005.00627.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zajonc R. B. (1968). Attitudinal effects of mere exposure . J. Pers. Soc. Psychol. 9 , 1–27 10.1037/h0025848 [ CrossRef ] [ Google Scholar ]

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • My Account Login
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • Review Article
  • Open access
  • Published: 17 February 2023

A brief history of heuristics: how did research on heuristics evolve?

  • Mohamad Hjeij   ORCID: orcid.org/0000-0003-4231-1395 1 &
  • Arnis Vilks 1  

Humanities and Social Sciences Communications volume  10 , Article number:  64 ( 2023 ) Cite this article

10k Accesses

3 Citations

2 Altmetric

Metrics details

Heuristics are often characterized as rules of thumb that can be used to speed up the process of decision-making. They have been examined across a wide range of fields, including economics, psychology, and computer science. However, scholars still struggle to find substantial common ground. This study provides a historical review of heuristics as a research topic before and after the emergence of the subjective expected utility (SEU) theory, emphasising the evolutionary perspective that considers heuristics as resulting from the development of the brain. We find it useful to distinguish between deliberate and automatic uses of heuristics, but point out that they can be used consciously and subconsciously. While we can trace the idea of heuristics through many centuries and fields of application, we focus on the evolution of the modern notion of heuristics through three waves of research, starting with Herbert Simon in the 1950s, who introduced the notion of bounded rationality and suggested the use of heuristics in artificial intelligence, thereby paving the way for all later research on heuristics. A breakthrough came with Daniel Kahneman and Amos Tversky in the 1970s, who analysed the biases arising from using heuristics. The resulting research programme became the subject of criticism by Gerd Gigerenzer in the 1990s, who argues that an ‘adaptive toolbox’ consisting of ‘fast-and-frugal’ heuristics can yield ‘ecologically rational’ decisions.

Similar content being viewed by others

to state that a case study is heuristic means that

Artificial intelligence and illusions of understanding in scientific research

Lisa Messeri & M. J. Crockett

to state that a case study is heuristic means that

High-throughput prediction of protein conformational distributions with subsampled AlphaFold2

Gabriel Monteiro da Silva, Jennifer Y. Cui, … Brenda M. Rubenstein

to state that a case study is heuristic means that

The role of artificial intelligence in achieving the Sustainable Development Goals

Ricardo Vinuesa, Hossein Azizpour, … Francesco Fuso Nerini

Introduction

Over the past 50 years, the notion of ‘heuristics’ has considerably gained attention in fields as diverse as psychology, cognitive science, decision theory, computer science, and management scholarship. While for 1970, the Scopus database finds a meagre 20 published articles with the word ‘heuristic’ in their title, the number has increased to no less than 3783 in 2021 (Scopus, 2022 ).

We take this to be evidence that many researchers in the aforementioned fields find the literature that refers to heuristics stimulating and that it gives rise to questions that deserve further enquiry. While there are some review articles on the topic of heuristics (Gigerenzer and Gaissmaier, 2011 ; Groner et al., 1983 ; Hertwig and Pachur, 2015 ; Semaan et al., 2020 ), a somewhat comprehensive and non-partisan historical review seems to be missing.

While interest in heuristics is growing, the very notion of heuristics remains elusive to the point that, e.g., Shah and Oppenheimer ( 2008 ) begin their paper with the statement: ‘The word “heuristic” has lost its meaning.’ Even if one leaves aside characterizations such as ‘rule of thumb’ or ‘mental shortcut’ and considers what Kahneman ( 2011 ) calls ‘the technical definition of heuristic,’ namely ‘a simple procedure that helps find adequate, though often imperfect, answers to difficult questions,’ one is immediately left wondering how simple it has to be, what an adequate, but the imperfect answer is, and how difficult the questions need to be, in order to classify a procedure as a heuristic. Shah and Oppenheimer conclude that ‘the term heuristic is vague enough to describe anything’.

However, one feature does distinguish heuristics from certain other, typically more elaborate procedures: heuristics are problem-solving methods that do not guarantee an optimal solution. The use of heuristics is, therefore, inevitable where no method to find an optimal solution exists or is known to the problem-solver, in particular where the problem and/or the optimality criterion is ill-defined. However, the use of heuristics may be advantageous even where the problem to be solved is well-defined and methods do exist which would guarantee an optimal solution. This is because definitions of optimality typically ignore constraints on the process of solving the problem and the costs of that process. Compared to infallible but elaborate methods, heuristics may prove to be quicker or more efficient.

Nevertheless, the range of what has been called heuristics is very broad. Application of a heuristic may require intuition, guessing, exploration, or experience; some heuristics are rather elaborate, others are truly shortcuts, some are described in somewhat loose terms, and others are well-defined algorithms.

One procedure of decision-making that is commonly not regarded as a heuristic is the application of the full-blown theory of subjective expected utility (SEU) in the tradition of Ramsey ( 1926 ), von Neumann and Morgenstern ( 1944 ), and Savage ( 1954 ). This theory is arguably spelling out what an ideally rational decision would be, but was already seen by Savage (p. 16) to be applicable only in what he called a ‘small world’. Quite a few approaches that have been called heuristics have been explicitly motivated by SEU imposing demands on the decision-maker, which are utterly impractical (cf., e.g., Klein, 2001 , for a discussion). As a second defining feature of the heuristics we want to consider, therefore, we take them to be procedures of decision-making that differ from the ‘gold standard’ of SEU by being practically applicable in at least a number of interesting cases. Along with SEU, we also leave aside the rules of deductive logic, such as Aristotelian syllogisms, modus ponens, modus tollens, etc. While these can also be seen as rules of decision-making, and the universal validity of some of them is not quite uncontroversial (see, e.g., Priest, 2008 , for an introduction to non-classical logic), they are widely regarded as ‘infallible’. By stark contrast, it seems characteristic for heuristics that their application may fail to yield a ‘best’ or ‘correct’ result.

By taking heuristics to be practically applicable, but fallible, procedures for problem-solving, we will also neglect the literature that focuses on the adjective ‘heuristic’ instead of on the noun. When, e.g., Suppes ( 1983 ) characterizes axiomatic analyses as ‘heuristic’, he is not suggesting any rule, but he is saying that heuristic axioms ‘seem intuitively to organize and facilitate our thinking about the subject’ (p. 82), and proceeds to give examples of both heuristic and nonheuristic axioms. It may of course be said that many fundamental equations in science, such as Newton’s force = mass*acceleration, have some heuristic value in the sense indicated by Suppes, but the research we will review is not about the property of being heuristic.

Given that heuristics can be assessed against the benchmark of SEU, one may distinguish broadly between heuristics suggested pre-SEU, i.e., before the middle of the 20th century, and the later research on heuristics that had to face the challenge of an existing theory of allegedly rational decision-making. We will review the former in the section “Deliberate heuristics—the art of invention” below, and devote sections “Herbert Simon: rationality is bounded”, “Heuristics in computer science” and “Daniel Kahneman and Amos Tversky: heuristics and biases” to the latter.

To cover the paradigmatic cases of what has been termed ‘heuristics’ in the literature, we have to take ‘problem-solving’ in a broad sense that includes decision-making and judgement, but also automatic, instinctive behaviour. We, therefore, feel that an account of research on heuristics should also review the main views on how observable behaviour patterns in humans—or maybe animals in general—can be explained. This we do in the section “Automatic heuristics: learnt or innate?”.

While our brief history cannot aim for completeness, we selected the scholars to be included based on their influence and contributions to different fields of research related to heuristics. Our focus, however, will be on the more recent research that may be said to begin with Herbert Simon.

That problem-solving according to SEU will, in general, be impractical, was clearly recognized by Herbert Simon, whose notion of bounded rationality we look at in the section “Herbert Simon: rationality is bounded”. In the section “Heuristics in computer science”, we also consider heuristics in computer science, where the motivation to use heuristics is closely related to Simon’s reasoning. In the section “Daniel Kahneman and Amos Tversky: heuristics and biases”, we turn to the heuristics identified and analysed by Kahneman and Tversky; while their assessment was primarily that the use of those heuristics often does not conform to rational decision-making, the approach by Gigerenzer and his collaborators, reviewed in the section “Gerd Gigerenzer: fast-and-frugal heuristics” below, takes a much more affirmative view on the use of heuristics. Section “Critiques” explains the limitations and critiques of the corresponding ideas. The final section “Conclusion” contains the conclusion, discussion, and avenues for future research.

The evolutionary perspective

While we focus on the history of research on heuristics, it is clear that animal behaviour patterns evolved and were shaped by evolutionary forces long before the human species emerged. Thus ‘heuristics’ in the mere sense of behaviour patterns have been used long before humans engaged in any kind of conscious reflection on decision-making, let alone systematic research. However, evolution endowed humans with brains that allow them to make decisions in ways that are quite different from animal behaviour patterns. According to Gibbons ( 2007 ), the peculiar evolution of the human brain started thousands of years ago when the ancient human discovered fire and started cooking food, which reduced the amount of energy the body needed for digestion. This paved the way for a smaller intestinal tract and implied that the excess calories led to the development of larger tissues and eventually a larger brain. Through this organ, intelligence increased exponentially, resulting in advanced communication that allowed Homo sapiens to collaborate and form relationships that other primates at the time could not match. According to Dunbar ( 1998 ), it was in the time between 400,000 and 100,000 years ago that abilities to hunt more effectively took humans from the middle of the food chain right to the top.

It does not seem to be known when and how exactly the human brain developed the ability to reflect on decisions made consciously, but it is now widely recognized that in addition to the fast, automatic, and typically nonconscious type of decision-making that is similar to animal behaviour, humans also employ another, rather a different type of decision-making that can be characterized as slow, conscious, controlled, and reflective. The former type is known as ‘System 1’ or ‘the old mind’, and the latter as ‘System 2’ or ‘the new mind’ (Evans, 2010 ; Kahneman, 2011 ), and both systems have clearly evolved side by side throughout the evolution of the human brain. According to Gigerenzer ( 2021 ), humans as well as other organisms evolved to acquire what he calls ‘embodied heuristics’ that can be both innate or learnt rules of thumb, which in turn supply the agility to respond to the lack of information by fast judgement. The ‘embodied heuristics’ use the mental capacity that includes the motor and sensory abilities that start to develop from the moment of birth.

While a detailed discussion of the ‘dual-process theories’ of the mind is beyond the scope of this paper, we find it helpful to point out that one may distinguish between ‘System 1 heuristics’ and ‘System 2 heuristics’ (Kahneman 2011 , p. 98). While some ‘rules of decision-making’ may be hard-wired into the human species by its genes and physiology, others are complicated enough that their application typically requires reflection and conscious mental effort. Upon reflection, however, the two systems are not as separate as they may seem. For example, participants in the Mental Calculation World Cup perform mathematical tasks instantly, whereas ordinary people would need a pen and paper or a calculator. Today, many people cannot multiply large numbers or calculate a square root using only a pen and paper but can easily do this using the calculator app on their smartphone. Thus, what can be done by spontaneous effortless calculation by some, may for others require the application of a more or less complicated theory.

Nevertheless, one can loosely characterize the heuristics that have been explained and recommended for more or less well-specified purposes over the course of history as System 2 or deliberate heuristics.

Deliberate heuristics—the art of invention

Throughout history, scholars have investigated methods to solve complex tasks. In this section, we review those attempts to formulate ‘operant and voluntary’ heuristics to solve demanding problems—in particular, to generate new insights or do research in more or less specified fields. Most of the heuristics in this section have been suggested before the emergence of the SEU theory and the associated modern definition of rationality, and none of them deals with the kind of decision problems that are assumed as ‘given’ in the SEU model. The reader will notice that some historical heuristics were suggested for problems that, today, may seem too general to be solved. However, through the development of such attempts, later scholars were inspired to develop a more concrete understanding of the notion of heuristics.

The Greek origin

The term heuristic originates from the Greek verb heurísko , which means to discover or find out. The Greek word heúrēka , allegedly exclaimed by Archimedes when discovering how to measure the volume of a random object through water, derives from the same verb and can be translated as I found it! (Pinheiro and McNeill, 2014 ). Heuristics can thus be said to be etymologically related to the discipline of discovery, the branch of knowledge based on investigative procedures, and are naturally associated with trial techniques, including what-if scenarios and simple trial and error.

While the term heurísko does not seem to be used in this context by Aristotle, his notion of induction ( epagôgê ) can be seen as a method to find, but not prove, true general statements and thus as a heuristic. At any rate, Aristotle considered inductive reasoning as leading to insights and as distinct from logically valid syllogisms (Smith, 2020 ).

Pappus (4th century)

While a brief, somewhat cryptic, mention of analysis and synthesis appears in Book 13 of some, but not all, editions of Euclid’s Elements, a clearer explanation of the two methods was given in the 4th century by the Greek mathematician and astronomer Pappus of Alexandria (cf. Heath, 1926 ; Polya, 1945 ; Groner et al., 1983 ). While synthesis is what today would be called deduction from known truths, analysis is a method that can be used to try and find proof. Two slightly different explanations are given by Pappus. They boil down to this: in order to find proof for a statement A, one can deduce another statement B from A, continue by deducing yet another statement C from B, and so on, until one comes upon a statement T that is known to be true. If all the inferences are convertible, the converse deductions evidently constitute a proof of A from T. While Pappus did not mention the condition that the inferences must be convertible, his second explanation of analysis makes it clear that one must be looking for deductions from A which are both necessary and sufficient for A. In Polya’s paraphrase of Pappus’ text: ‘We enquire from what antecedent the desired result could be derived; then we enquire again what could be the antecedent of that antecedent, and so on, until passing from antecedent to antecedent, we come eventually upon something already known or admittedly true.’ Analysis thus described is hardly a ‘shortcut’ or ‘rule of thumb’, but quite clearly it is a heuristic: it may help to find a proof of A, but it may also fail to do so…

Al-Khawarizmi (9th century)

In the 9th century, the Persian thinker Mohamad Al-Khawarizmi, who resided in Baghdad’s centre of knowledge or the House of Wisdom , used stepwise methods for problem-solving. Thus, after his name and findings, the algorithm concept was derived (Boyer, 1991 ). Although a heuristic orientation has sometimes been contrasted with an algorithmic one (Groner and Groner, 1991 ), it is worth noting that an algorithm may well serve as a heuristic—certainly in the sense of a shortcut, and also in the sense of a fallible method. After all, an algorithm may fail to produce a satisfactory result. We will return to this issue in the section “Heuristics in computer science” below.

Zairja (10th century)

Heuristic methods were created by medieval polymaths in their attempts to find solutions for the complex problems they faced—science not yet being divorced from what today would appear as theology or astrology. Perhaps the first tangible example of a heuristic based on a mechanical device was using an ancient tool called a zairja , which Arab astrologers employed before the 11th century (Ritchey, 2022 ). It was designed to reconfigure notions into ideas through randomization and resonance and thus to produce answers to questions mechanically (Link, 2010 ). The word zairja may have originated from the Persian combination zaicha-daira , which means horoscope-circle. According to Ibn Khaldoun, ‘zairja is the technique of finding out answers from questions by means of connections existing between the letters of the expressions used in the question; they imagine that these connections can form the basis for knowing the future happenings they want to know’ (Khaldun, 1967 ).

Ramon Llull (1305)

The Majorcan philosopher Ramon Llull (or Raimundus Lullus), who was exposed to the Arabic culture, used the zairja as a starting point for his ars inveniendi veritatem that was meant to complement the ars demonstrandi of medieval Scholastic logic and on which he worked from around 1270–1305 (Link, 2010 ; Llull, 1308 ; Ritchey, 2022 ) when he finished his Ars Generalis Ultima (or Ars Magna ). Llull transformed the astrological and combinatorial components of the zairja into a religious system that took the fundamental ideas of the three Abrahamic faiths of Islam, Christianity, and Judaism and analysed them through symbolic and numeric reasoning. Llull tried to broaden his theory across all fields of knowledge and combine all sciences into a single science that would address all human problems. His thoughts impacted great thinkers, such as Leibniz, and even the modern theory of computation (Fidora and Sierra, 2011 ). Llull’s approach may be considered a clear example of heuristic methods applied to complicated and even theological questions (Hertwig and Pachur, 2015 ).

Joachim Jungius (1622)

Arguably, the German mathematician and philosopher Joachim Jungius was the first to use the terminology heuretica in a call to establish a research society in 1622. Jungius distinguished between three degrees or levels of learning and cognition: empirical, epistemic, and heuristic. Those who have reached the empirical level believe that what they have learned is true because it corresponds to experience. Those who have reached the epistemic level know how to derive their knowledge from principles with rigorous evidence. But those who have reached the highest level, the heuristic level, have a method of solving unsolved problems, finding new theorems, and introducing new methods into science (Ritter et al., 2017 ).

René Descartes (1637)

In 1637, the French philosopher René Descartes published his Discourse on Method (one of the first major works not written in Latin). Descartes argued that humans could utilize mathematical reasoning as a vehicle for progress in knowledge. He proposed four simple steps to follow in problem-solving. First, to accept as true only what is indubitable. Next, divide the problem into as many smaller subproblems as possible and helpful. After that, to conduct one’s thoughts in an orderly fashion, beginning with the simplest and gradually ascending to the most complex. And finally, to make enumerations so complete that one is assured of having omitted nothing (Descartes, 1998 ). In reference to his other methods, Descartes ( 1908 ) started working on the proper heuristic rules to transform every problem, when possible, into algebraic equations, thus creating a mathesis universalis or universal science. In his unfinished ‘Rules for the Direction of the Mind’ or Regulae ad directionem ingenii , Descartes suggested 21 heuristic rules (of planned 36) for scientific research like simplifying the problem, rewriting the problem in geometrical shape, and identifying the knowns and the unknowns. Although Leibniz criticized the rules of Descartes for being too general (Leibniz, 1880 ), this treatise outlined the basis for later work on complex problems in several disciplines.

Gottfried Wilhelm Leibniz (1666)

Influenced by the ideas of Llull, Jungius, and Descartes, the Prussian–German polymath Gottfried Wilhelm Leibniz suggested an original approach to problem-solving in his Dissertatio de Arte Combinatoria , published in Leipzig in 1666. His aim was to create a new universal language into which all problems could be translated and a standard solving procedure that could be applied regardless of the type of the problem. Leibniz also defined an ars inveniendi as a method for finding new truths, distinguishing it from an ars iudicandi , a method to evaluate the validity of alleged truths. Later, in 1673, he invented the calculating machine that could execute all four arithmetic operations and thus find ‘new’ arithmetic truths (Pombo, 2002 ).

Bernard Bolzano ( 1837 )

In 1837, the Czech mathematician and philosopher Bernard Bolzano published his four-volume Wissenschaftslehre (Theory of Science). The fourth part of his theory he called ‘Erfindungskunst’ or the art of invention, mentions in the introductory section 322 that ‘heuristic’ is just the Greek translation. Bolzano explains that the rules he is going to state are not at all entirely new, but instead have always been used ‘by the talented’—although mostly not consciously. He then explains 13 general and 33 special rules one should follow when trying to find new truths. Among the general rules are, e.g., that one should first decide on the question one wants to answer, and the kind of answer one is looking for (section 325), or that one should choose suitable symbols to represent one’s ideas (section 334). Unlike the general rules, the special ones are meant to be helpful for special mental tasks only. E.g., in order to solve the task of finding the reason for any given truth, Bolzano advises first to analyse or dissect the truth into its parts and then use those to form truths which are simpler than the given one (section 378). Another example is Bolzano’s special rule 28, explained in section 386, which is meant to help identify the intention behind a given action. To do so, Bolzano advises exploring the agent’s beliefs about the effects of his action at the time he decided to act, and explains that this will require investigating the agent’s knowledge, his degree of attention and deliberation, any erroneous beliefs the agent may have had, and ‘many other circumstances’. Bolzano continues to point out that any effect the agent may have expected to result from his action will not be an intended one if he considered it neither as an obligation nor as advantageous. While Bolzano’s rules can hardly be considered as ‘shortcuts’, he mentions again and again that they may fail to solve the task at hand adequately (cf. Hertwig and Pachur, 2015 ; Siitonen, 2014 ).

Frank Ramsey ( 1926 )

In Ramsey’s pathbreaking paper on ‘Truth and Probability’ which laid the foundation of subjective probability theory, a final section that has received little attention in the literature is devoted to inductive logic. While he does not use the word ‘heuristic’, he characterizes induction as a ‘habit of the mind,’ explaining that he uses ‘habit in the most general possible sense to mean simply rule or the law of behaviour, including instinct,’ but also including ‘acquired rules.’ Ramsey gives the following pragmatic justification for being convinced by induction: ‘our conviction is reasonable because the world is so constituted that inductive arguments lead on the whole to true opinions,’ and states more generally that ‘we judge mental habits by whether they work, i.e., whether the opinions they lead to are for the most part true, or more often true than those which alternative habits would lead to’ (Ramsey, 1926 ). In modern terminology, Ramsey was pointing out that mental habits—such as inductive inference—may be more or less ‘ecologically rational’.

Karl Duncker ( 1935 )

Karl Duncker was a pioneer in the experimental investigation of human problem-solving. In his 1935 book Zur Psychologie des produktiven Denkens , he discussed both heuristics that help to solve problems, but also hindrances that may block the solution of a problem—and reported on a number of experimental findings. Among the heuristics was a situational analysis with the aim of uncovering the reasons for the gap between the status quo and the problem-solvers goal, analysis of the goal itself, and of sacrifices the problem-solver is willing to make, of prerequisites for the solution, and several others. Among the hindrances to problem-solving was what Duncker called functional fixedness, illustrated by the famous candle problem, in which he asked the participants to fix a candle to the wall and light it without allowing the wax to drip. The available tools were a candle, matches, and a box filled with thumbtacks. The solution was to empty the box of thumbtacks, fix the empty box to the wall using the thumbtacks, put the candle in the box, and finally light the candle. Participants who were given the empty box as a separate item could solve this problem, while those given the box filled with thumbtacks struggled to find a solution. Through this experiment, Duncker illustrated an inability to think outside the box and the difficulty in using a device in a way that is different from the usual one (Glaveanu, 2019 ). Duncker emphasized that success in problem-solving depends on a complementary combination of both the internal mind and the external problem structure (cf. Groner et al., 1983 ).

George Polya ( 1945 )

The Hungarian mathematician George Polya can be aptly called the father of problem-solving in modern mathematics and education. In his 1945 book, How to Solve it , Polya writes that ‘heuristic…or ‘ ars inveniendi ’ was the name of a certain branch of study…often outlined, seldom presented in detail, and as good as forgotten today’ and he attempts to ‘revive heuristic in a modern and modest form’. According to his four principles of mathematical problem-solving, it is first necessary to understand the problem, then plan the execution, carry out the plan, and finally, reflect and search for improvement opportunities. Among the more detailed suggestions for problem-solving explained by Polya are to ask questions such as ‘can you find the solution to a similar problem?’, to use inductive reasoning and analogy, or to choose a suitable notation. Procedures inspired by Polya’s ( 1945 ) book and several later ones (e.g., Induction and Analogy in Mathematics of 1954 ) also informed the field of artificial intelligence (AI) (Hertwig and Pachur, 2015 ).

Johannes Müller (1968)

In 1968, the German scientist Johannes Müller introduced the concept of systematic heuristics while working on his postdoctoral thesis at the Chemnitz University of Technology. Systematic heuristics is a framework for improving the efficiency of intellectual work using problem-solving processes in the fields of science and technology.

The main idea of systematic heuristics is to solve repeated problems with previously validated solutions. These methods are called programmes and are gathered in a library that can be accessed by the main programme, which receives the requirements, prepares the execution plan, determines the required procedures, executes the plan, and finally evaluates the results. Müller’s team was dismissed for ideological reasons, and his programme was terminated after a few years, but his findings went on to be successfully applied in many projects across different industries (Banse and Friedrich, 2000 ).

Imre Lakatos ( 1970 )

In his ‘Methodology of Scientific Research Programmes’ that turned out to be a major contribution to the Popper–Kuhn controversy about the rationality of non-falsifiable paradigms in the natural sciences, Lakatos introduced the interesting distinction between a ‘negative heuristic’ that is given by the ‘hard core’ of a research programme and the ‘positive heuristic’ of the ‘protective belt’. While the latter suggests ways to develop the research programme further and to predict new facts, the ‘hard core’ of the research programme is treated as irrefutable ‘by the methodological decision of its protagonists: anomalies must lead to changes only in the ‘protective’ belt’ of auxiliary hypotheses. The Lakatosian notion of a negative heuristic seems to have received little attention outside of the Philosophy of Science community but may be important elsewhere: when there are too many ways to solve a complicated problem, excluding some of them from consideration may be helpful.

Gerhard Kleining ( 1982 )

The German sociologist Gerhard Kleining suggested a qualitative heuristic as the appropriate research method for qualitative social science. It is based on four principles: (1) open-mindedness of the scientist who should be ready to revise his preconceptions about the topic of study, (2) openness of the topic of study, which is initially defined only provisionally and allowed to be modified in course of the research, (3) maximal variation of the research perspective, and (4) identification of similarities within the data (Kleining, 1982 , 1995 ).

Automatic heuristics: learnt or innate?

Unlike the deliberate, and in some cases quite elaborate, heuristics reviewed above, at least some System 1 heuristics are often applied automatically, without any kind of deliberation or conscious reflection on the task that needs to be performed or the question that needs to be answered. One may view them as mere patterns of behaviour, and as such their scientific examination has been a long cumulative process through different disciplines, even though explicit reference to heuristics was not often made.

Traditionally, examining the behaviour patterns of any living creature, any study concerning thoughts, feelings, or cognitive abilities was regarded as the task of biologists. However, the birth of psychology as a separate discipline paved the way for an alternative outlook. Evolutionary psychology views human behaviour as being shaped through time and experience to promote survival throughout the long history of human struggle with nature. With many factors to consider, scholars have been interested in the evolution of the human brain, patterns of behaviour, and problem-solving (Buss and Kenrick, 1998 ).

Charles Darwin (1873)

Charles Darwin himself maybe qualifies for the title of first evolutionary psychologist, as his perceptions laid the foundations for this field that would continue to grow over a century later (Ghiselin, 1973 ).

In 1873, Darwin claimed that the brain’s articulations regarding expressions and emotions have probably developed similarly to its physical traits (Baumeister and Vohs, 2007 ). He acknowledged that personal demonstrations or expressions have a high capacity for interaction with different peers from the same species. For example, an aggressive look flags an eagerness to battle yet leaves the recipient with the option of retreating without either party being harmed. Additionally, Darwin, as well as his predecessor Lamarck, constantly emphasized the role of environmental factors in ‘the struggle for existence’ that could shape the organism’s traits in response to changes in their corresponding environments (Sen, 2020 ). The famous example of giraffes that grew long necks in response to trees growing taller is an illustration of a major environmental effect. Similarly, cognitive skills, including heuristics, must have also been shaped by the environments to evolve and keep humans surviving and reproducing.

Darwin’s ideas impacted the early advancement of brain science, psychology, and all related disciplines, including the topic of cognitive heuristics (Smulders, 2009 ).

William James (1890)

A few years later, in 1890, the father of American psychology, William James, introduced the notion of evolutionary psychology in his 1200-page text The Principles of Psychology , which later became a reference on the subject and helped establish psychology as a science. In its core content, James reasoned that many actions of the human being demonstrate the activity of instincts, which are the evolutionary embedded inclinations to react to specific incentives in adaptive manners. With this idea, James added an important building block to the foundation of heuristics as a scientific topic.

A simple example of such hard-wired behaviour patterns would be a sneeze, the preprogrammed reaction of convulsive nasal expulsion of air from the lungs through the nose and mouth to remove irritants (Baumeister and Vohs, 2007 ).

Ivan Pavlov (1897)

Triggered by scientific curiosity or the instinct for research, as he called it, the first Russian Nobel laureate, Ivan Pavlov, introduced classical conditioning, which occurs when a stimulus is used that has a predictive relationship with a reinforcer, resulting in a change in response to the stimulus (Schreurs, 1989 ). This learning process was demonstrated through experiments conducted with dogs. In the experiments, a bell (a neutral stimulus) was paired with food (a potent stimulus), resulting ultimately in the dogs salivating at the ringing of the bell—a conditioned response. Pavlov’s experiments remain paradigmatic cases of the emergence of behaviour patterns through association learning.

William McDougall (1909)

At the start of the 20th century, the Anglo-American psychologist William McDougall was one of the first to write about the instinct theory of motivation. McDougall argued that instincts trigger many critical social practices. He viewed instincts as extremely sophisticated faculties in which specific provocations such as social impediments can drive a person’s state of mind in a particular direction, for example, towards a state of hatred, envy, or anger, which in turn may increase the probability of specific practices such as hostility or violence (McDougall, 2015 ).

However, in the early 1920s, McDougall’s perspective about human behaviour being driven by instincts faded remarkably as scientists supporting the concept of behaviourism started to get more attention with original ideas (Buss and Kenrick, 1998 ).

John B. Watson (1913)

The pioneer of the psychological school of behaviourism, John B. Watson, who conducted the controversial ‘Little Albert’ experiment by imposing a phobia on a child to evidence classical conditioning in humans (Harris, 1979 ), argued against the ideas of McDougall, even within public debates (Stephenson, 2003 ). Unlike McDougall, Watson considered the brain an empty page ( tabula rasa as described by Aristotle). According to him. all personality traits and behaviours directly result from the accumulated experience that starts from birth. Thus, the story of the human mind is a continuous writing process featured by surrounding events and factors. This perception was supported in the following years of the 20th century by anthropologists who revealed many very different social standards in different societies, and numerous social researchers argued that the wide variety of cross-cultural differences should lead to the conclusion that there is no mental content built-in from birth, and that all knowledge, therefore, comes from individual experience or perception (Farr, 1996 ). In stark contrast to McDougall, Watson suggested that human intuitions and behaviour patterns are the product of a learning process that starts blank.

B. F. Skinner (1938)

Inspired by the work of Pavlov, the American psychologist B.F. Skinner took the classical conditioning approach to a more advanced level by modifying a key aspect of the process. According to Skinner, human behaviour is dependent on the outcome of past activities. If the outcome is bad, the action will probably not be repeated; however, if the outcome is good, the likelihood of the activity being repeated is relatively high. Skinner called this process reinforcement learning (Schacter et al., 2011 ). Based on reinforcement learning, Skinner also introduced the concept of operant conditioning, a type of associative learning process through which the strength of a behaviour is adjusted by reinforcement or punishment. Considering, for example, a parent’s response to a child’s behaviour, the probability of the child repeating an action will be highly dependent on the parent’s reaction (Zilio, 2013 ). Effectively, Skinner argues that the intuitive System 1 may get edited and that a heuristical cue may become more or less ‘hard-wired’ in the subject’s brain as a stimulus leading to an automatic response.

The DNA and its environment (1953 onwards)

Today, there seems to be wide agreement that behaviour patterns in humans and other species are to some extent ‘in the DNA’, the structure of which was discovered by Francis Crick and James Watson in 1953, but that they also to some extent depend on ‘the environment’—including the social environment in which the agent lives and has problems to solve. Today, it seems safe to say, therefore, that the methods of problem-solving that humans apply are neither completely innate nor completely the result of environmental stimuli—but rather the product of the complex interaction between genes and the environment (Lerner, 1978 ).

Herbert Simon: rationality is bounded

Herbert Simon is well known for his contributions to several fields, including economics, psychology, computer science, and management. Simon proposed a remarkable theory that led him to be awarded the Nobel Prize for Economics in 1978.

Bounded rationality and satisficing

In the mid-1950s, Simon published A Behavioural Model of Rational Choice, which focused on bounded rationality: the idea that people must make decisions with limited time, mental resources, and information (Simon, 1955 ). He clearly states the triangle of limitations in every decision-making process—the availability of information, time, and cognitive ability (Bazerman and Moore, 1994 ). The ideas of Simon are considered an inspiring foundation for many technologies in use today.

Instead of conforming to the idea that economic behaviour can be seen as rational and dependent on all accessible data (i.e., as optimization), Simon suggested that the dynamics of decision-making were essentially ‘satisficing,’ a notion synthesized from ‘satisfy’ and ‘suffice’ (Byron, 1998 ). During the 1940s, scholars noticed the frequent failure of two assumptions required for ‘rational’ decision-making. The first is that data is never enough and may be far from perfect, while people dependably make decisions based on incomplete data. Second, people do not assess every feasible option before settling on a decision. This conduct is highly correlated with the cost of data collection since data turns out to be progressively harder and costlier to accumulate. Rather than trying to find the ideal option, people choose the first acceptable or satisfactory option they find. Simon described this procedure as satisficing and concluded that the human brain in the decision-making process would, at best, exhibit restricted abilities (Barros, 2010 ).

Since people can neither obtain nor process all the data needed to make a completely rational decision, they use the limited data they possess to determine an outcome that is ‘good enough’—a procedure later refined into the take-the-best heuristic. Simon’s view that people are bounded by their cognitive limits is usually known as the theory of bounded rationality (cf. Gigerenzer and Selten, 2001 ).

Herbert Simon and AI

With the cooperation of Allen Newell of the RAND Corporation, Simon attempted to create a computer simulator for human decision-making. In 1956, they created a ‘thinking’ machine called the ‘Logic Theorist’. This early smart device was a computer programme with the ability to prove theorems in symbolic logic. It was perhaps the first man-made programme that simulated some human reasoning abilities to solve actual problems (Gugerty, 2006 ). After a few years, Simon, Newell, and J.C. Shaw proposed the General Problem Solver or GPS, the first AI-based programme ever invented. They actually aimed to create a single programme that could solve all problems with the same unified algorithm. However, while the GPS was efficient with sufficiently well-structured problems like the Towers of Hanoi (a puzzle with 3 rods and different-sized disks to be moved), it could not solve real-life scenarios with all their complexities (A. Newell et al., 1959 ).

By 1965, Simon was confident that ‘machines will be capable of doing any work a man can do’ (Vardi, 2012 ). Therefore, Simon dedicated most of the remainder of his career to the advancement of machine intelligence. The results of his experiments showed that, like humans, certain computer programmes make decisions using trial-and-error and shortcut methods (Frantz, 2003 ). Quite explicitly, Simon and Newell ( 1958 , p. 7) referred to heuristics being used by both humans and intelligent machines: ‘Digital computers can perform certain heuristic problem-solving tasks for which no algorithms are available… In doing so, they use processes that are closely parallel to human problem-solving processes’.

Additionally, the importance of the environment was also clearly observed in Newell and Simon’s ( 1972 ) work:

‘Just as scissors cannot cut paper without two blades, a theory of thinking and problem-solving cannot predict behaviour unless it encompasses both an analysis of the structure of task environments and an analysis of the limits of rational adaptation to task requirements’ (p. 55).

Accordingly, the term ‘task environment’ describes the formal structure of the universe of choices and results for a specific problem. At the same time, Newell and Simon do not treat the agent and the environment as two isolated entities, but rather as highly related. Consequently, they tend to believe that agents with different cognitive abilities and choice repertoires will inhabit different task environments even though their physical surroundings and intentions might be the same (Agre and Horswill, 1997 ).

Heuristics in computer science

Computer science as a discipline may have the biggest share of deliberately applied heuristics. As heuristic problem-solving has often been contrasted with algorithmic problem-solving—even by Simon and Newell ( 1958 )—it is worth recalling that the very notion of ‘algorithm’ was clarified only in the first half of the 20th century, when Alan Turing ( 1937 ) defined what was later named ‘Turing-machine’. Basically, he defined ‘mechanical’ computation as a computation that can be done by a—stylized—machine. ‘Mechanical’ being what is also known today as algorithmic, one can say that any procedure that can be performed by a digital computer is algorithmic. Nevertheless, many of them are also heuristics because an algorithm may fail to produce an optimal solution to the problem it is meant to solve. This may be so either because the problem is ill-defined or because the computations required to produce the optimal solution may not be feasible with the available resources. If the problem is ill-defined—as it often is, e.g., in natural language processing—the algorithm that does the processing has to rely on a well-defined model that does not capture the vagueness and ambiguities of the real-life problem—a problem typically stated in natural language. If the problem is well-defined, but finding the optimal solution is not feasible, algorithms that would find it may exist ‘in principle’, but require too much time or memory to be practically implemented.

In fact, there is today a rich theory of complexity classes that distinguishes between types of (well-defined) problems according to how fast the time or memory space required to find the optimal solution increases with increasing problem size. E.g., for problem types of the complexity class P, any deterministic algorithm that produces the optimal solution has a running time bounded by a polynomial function of the input size, whereas, for problems of complexity class EXPTIME, the running time is bounded by an exponential function of the input size. In the jargon of computer science, problems of the latter class are considered intractable, although the input size has to become sufficiently large before the computation of the optimal solution becomes practically infeasible (cf. Harel, 2000 ; Hopcroft et al., 2007 ). Research indicates that the computational complexity of problems can also reduce the quality of human decision-making (Bossaerts and Murawski, 2017 ).

Shortest path algorithms

A classic optimization problem that may serve to illustrate the issues of optimal solution, complexity, and heuristics goes by the name of the travelling salesman problem (TSP), which was first introduced in 1930. In this problem, several cities with given distances between each two are considered, and the goal is to find the shortest possible path through all cities and return to the starting point. For a small input size, i.e., for a small number of cities, the ‘brute-force’ algorithm is easy to use: write down all the possible paths through all the cities, calculate their lengths, and choose the shortest. However, the number of steps that are required by this procedure quickly increases with the number of cities. The TSP is today known to belong to the complexity class NP which is in between P and EXPTIME Footnote 1 ). To solve the TSP, Jon Bentley ( 1982 ) proposed the greedy (or nearest-neighbour) algorithm that will yield an acceptable result, but not necessarily the optimal one, within a relatively short time. This approach always picks the nearest neighbour as the next city to visit without regard to possible later non-optimal steps. Hence, it is considered a good-enough solution with fast results. Bentley argued that there may be better solutions, but that it approximates the optimal solution. Many other heuristic algorithms have been explored later on. There is no assurance that the solution found by a heuristic algorithm will be an ideal answer for the given problem, but it is acceptable and adequate (Pearl, 1984 ).

Heuristic algorithms of the shortest path are utilized nowadays by GPS frameworks and self-driving vehicles to choose the best route from any point of departure to any destination (for example, A* Search Algorithm). Further developed algorithms can also consider additional elements, including traffic, speed limits, and quality of roads, they may yield the shortest routes in terms of distance and the fastest ones in terms of driving time.

Computer chess

While the TSP consists of a whole set of problems which differ by the number of cities and the distances between them, determining the optimal strategy for chess is just one problem of a given size. The rules of chess make it a finite game, and Ernst Zermelo proved in 1913 that it is ‘determined’: if it were played between perfectly rational players, it would always end with the same outcome: either White always wins, or Black always wins, or it always ends with a draw (Zermelo, 1913 ). Up to the present day, it is not known which of the three is true, which points to the fact that a brute-force algorithm that would go through all possible plays of chess is practically infeasible: it would have to explore too many potential moves, and the required memory would quickly run out of space (Schaeffer et al., 2007 ). Inevitably, a chess-playing machine has to use algorithms that are ‘shortcuts’—which can be more or less intelligent.

While Simon and Newell had predicted in 1958 that within ten years the world chess champion would be a computer, it took until 1997, when a chess-playing machine developed by IBM under the name Deep Blue defeated grandmaster Garry Kasparov. Although able to analyse millions of possibilities due to their computing powers, today’s chess-playing machines apply a heuristic approach to eliminate unlikely moves and focus on those with a high probability of defeating their opponent (Newborn, 1997 ).

Machine learning

One of the main features of machine learning is the ability of the model to predict a future outcome based on past data points. Machine learning algorithms build a knowledge base similar to human experience from previous experiences in the dataset provided. From this knowledge base, the model can derive educated guesses.

A good demonstration of this is the card game Top Trumps in which the model can learn to play and keep improving to dominate the game. It does so by undertaking a learning path through a sequence of steps in which it picks two random cards from the deck and then analyses and compares them with random criteria. According to the winning result, the model iteratively updates its knowledge base in the same manner as a human, following the rule that ‘practice makes perfect.’ Hence the model will play, collect statistics, update, and iterate while becoming more accurate with each increment (Volz et al., 2016 ).

Natural language processing

In the world of language understanding, current technologies are far from perfect. However, models are becoming more reliable by the minute. When analysing and dissecting a search phrase entered into the Google search engine, a background model tries to make sense of the search criteria. Stemming words, context analysis, the affiliation of phrases, previous searches, and autocorrect/autocomplete can be applied in a heuristic algorithm to display the most relevant result in less than a second. Heuristic methods can be utilized when creating certain algorithms to understand what the user is trying to express when searching for a phrase. For example, using word affiliation, an algorithm tries to narrow down the meaning of words as much as possible toward the user’s intention, particularly when a word has more than one meaning but changes with the context. Therefore, a search for apple pie allows the algorithm to deduce that the user is highly interested in recipes and not in the technology company (Sullivan, 2002 ).

Search and big data

Search is a good example to appreciate the value of time, as one of the most important criteria is retrieving acceptable results in an acceptable timeframe. In a full search algorithm, especially in large datasets, retrieving optimal results can take a massive amount of time, making it necessary to apply heuristic search.

Heuristic search is a type of search algorithm that is used to find solutions to problems in a faster way than an exhaustive search. It uses specific criteria to guide the search process and focuses on more favourable areas of the search space. This can greatly reduce the number of nodes required to find a solution, especially for large or complex search trees.

Heuristic search algorithms work by evaluating the possible paths or states in a search tree and selecting the better ones to explore further. They use a heuristic function, which is a measure of how close a given state is to the goal state, to guide the search. This allows the algorithm to prioritize certain paths or states over others and avoid exploring areas of the search space that are unlikely to lead to a solution. The reached solution is not necessarily the best, however, a ‘good enough’ one is found within a ‘fast enough’ time. This technique is an example of a trade-off between optimality and speed (Russell et al., 2010 ).

Today, there is a rich literature on heuristic methods in computer science (Martí et al., 2018 ). As the problem to be solved may be the choice of a suitable heuristic algorithm, there are also meta-heuristics that have been explored (Glover and Kochenberger, 2003 ), and even hyper-heuristics which may serve to find or generate a suitable meta-heuristic (Burke et al., 2003 ). As Sörensen et al. ( 2018 ) point out, the term ‘metaheuristic’ may refer either to an ‘algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms’—or to a specific algorithm that is based on such a framework. E.g., a metaheuristic to find a suitable search algorithm may be inspired by the framework of biological evolution and use its ideas of mutation, reproduction and selection to produce a particular search algorithm. While this algorithm will still be a heuristic one, the fact that it has been generated by an evolutionary process indicates its superiority over alternatives that have been eliminated in the course of that process (cf. Vikhar, 2016 ).

Daniel Kahneman and Amos Tversky: heuristics and biases

Inspired by the concepts of Herbert Simon, psychologists Daniel Kahneman and Amos Tversky initiated the heuristics and biases research programme in the early 1970s, which emphasized how individuals make judgements and the conditions under which those judgements may be inaccurate (Kahneman and Klein, 2009 ).

In addition, Kahneman and Tversky emphasized information processing to elaborate on how real people with limitations can decide, choose, or estimate (Kahneman, 2011 ).

The remarkable article Judgement under Uncertainty: Heuristics and Biases , published in 1974, is considered the turning key that opened the door wide to research on this topic, although it was and still is considered controversial (Kahneman, 2011 ). In their research, Kahneman and Tversky identified three types of heuristics by which probabilities are often assessed: availability, representativeness, and anchoring and adjustment. In passing, Kahneman and Tversky mention that other heuristics are used to form non-probabilistic judgements; for example, the distance of an object may be assessed according to the clarity with which it is seen. Other researchers subsequently introduced different types of heuristics. However, availability, representativeness, and anchoring are still considered fundamental heuristics for judgements under uncertainty.

Availability

According to the psychological definition, availability or accessibility is the ease with which a specific thought comes to mind or can be inferred. Many people use this type of heuristic when judging the probability of an event that may have happened or will happen in the future. Hence, people tend to overestimate the likelihood of a rare event if it easily comes to mind because it is frequently mentioned in daily discussions (Kahneman, 2011 ). For instance, individuals overestimate their probability of being victims of a terrorist attack while the real probability is negligible. However, since terrorist attacks are highly available in the media, the feeling of a personal threat from such an attack will also be highly available during our daily life (Kahneman, 2011 ).

This concept is also present in business, as we remember the successful start-ups whose founders quit college for their dreams, such as Steve Jobs and Mark Zuckerberg, and ignore the thousands of ideas, start-ups, and founders that failed. This is because successful companies are considered a hot topic and receive broad coverage in the media, while failures do not. Similarly, broad media coverage is known to create top-of-mind awareness (TOMA) (Farris et al., 2010 ). Moreover, the availability type of heuristics was offered as a clarification for fanciful connections or irrelevant correlations in which individuals wrongly judge two events to be related to each other when they are not. Tversky and Kahneman clarified that individuals judge relationships based on the ease of envisioning the two events together (Tversky and Kahneman, 1973 ).

Representativeness

The representativeness heuristic is applied when individuals assess the probability that an object belongs to a particular class or category based on how much it resembles the typical case or prototype representing this category (Tversky and Kahneman, 1974 ). Conceptually, this heuristic can be decomposed into three parts. The first one is that the ideal case or prototype of the category is considered representative of the group. The second part judges the similarity between the object and the representative prototype. The third part is that a high degree of similarity indicates a high probability that the object belongs to the category, and a low degree of similarity indicates a low probability.

While the heuristic is often applied automatically within an instant and may be compelling in many cases, Tversky and Kahneman point out that the third part of the heuristic will often lead to serious errors or, at any rate, biases.

In particular, the representativeness heuristic can give rise to what is known as the base rate fallacy. As an example, Tversky and Kahneman consider an individual named Steve, who is described as shy, withdrawn, and somewhat pedantic, and report that people who have to assess, based on this description, whether Steve is more likely to be a librarian or a farmer, invariably consider it more likely that he is a librarian—ignoring the fact that there are many more farmers than librarians, the fact that an estimate of the probability that Steve is a librarian or a farmer, respectively, must take into account.

Another example is that a taxicab was engaged in an accident. The data indicates that 85% of the taxicabs are green and 15% blue. An eyewitness claims that the involved cab was blue. The court then evaluates the witness for reliability because he is 80% accurate and 20% inaccurate. So now, what would be the probability of the involved cab being blue, given that the witness identified it as blue as well?

To evaluate this case correctly, people should consider the base rate, 15% of the cabs being blue, and the witness accuracy rate, 80%. Of course, if the number of cabs is equally split between colours, then the only factor in deciding is the reliability of the witness, which is an 80% probability.

However, regardless of the colours’ distribution, most participants would select 80% to respond to this enquiry. Even participants who wanted to take the base rate into account estimated a probability of more than 50%, while the right answer is 41% using the Bayesian inference (Kahneman, 2011 ).

In relation to the representativeness heuristic, Kahnemann ( 2011 ) illustrated the ‘conjunction fallacy’ in the following example: based only on a detailed description of a character named Linda, doctoral students in the decision science programme of the Stanford Graduate School of Business, all of whom had taken several advanced courses in probability, statistics, and decision theory, were asked to rank various other descriptions of Linda according to their probability. Even Kahneman and Tversky were surprised to find that 85% of the students ranked Linda as a bank teller active in the feminist movement as more likely than Linda as a bank teller.

From these and many other examples, one must conclude that even sophisticated humans use the representativeness heuristic to make probability judgements without referring to what they know about probability.

Representativeness is used to make probability judgements and judgements about causality. The similarity of A and B neither indicates that A causes B nor that B causes A. Nevertheless, if A precedes B and is similar to B, it is often judged to be B’s cause.

Adjustment and anchoring

Based on Tversky and Kahneman’s interpretations, the anchor is the first available number introduced in a question forming the centre of a circle whose radius (up or down) is an acceptable range within which lies the best answer (Baron, 2000 ). This is used and tested in several academic and real-world scenarios and in business negotiations where parties anchor their prices to formulate the range of acceptance through which they can close the deal, deriving the ceiling and floor from the anchor. The impact is more dominant when parties lack time to analyse actions thoroughly.

Significantly, even if the anchor is way beyond logical boundaries, it can still bias the estimated numbers by all parties without them even realizing that it does (Englich et al., 2006 ).

In one of their experiments, Tversky and Kahneman ( 1974 ) asked participants to quickly calculate the product of numbers from 1 to 8 and others to do so from 8 to 1. Since the time was limited to 5 min, they needed to make a guess. The group that started from 1 had an average of 512, while the group that started from 8 had an average of 2250. The right answer was 40,320.

Perhaps this is one of the most unclear cognitive heuristics introduced by Kahneman and Tversky that can be interchangeably considered as a bias instead of a heuristic. The problem is that the mind tends to fixate on the anchor and adjust according to it, whether it was introduced implicitly or explicitly. Some scholars even believe that such bias/heuristic is unavoidable. For instance, in one study, participants were asked if they believed that Mahatma Gandhi died before or after nine years old versus before or after 140 years old. Unquestionably, these anchors were considered unrealistic by the audience. However, when the participants were later asked to give their estimate of Gandhi’s age of death, the group which was anchored to 9 years old speculated the average age to be 50, while the group anchored to the highest value estimated the age of death to be as high as 67 (Strack and Mussweiler, 1997 ).

Gerd Gigerenzer: fast-and-frugal heuristics

The German psychologist Gerd Gigerenzer is one of the most influential figures in the field of decision-making, with a particular emphasis on the use of heuristics. He has built much of his research on the theories of Herbert Simon and considers that Simon’s theory of bounded rationality was unfinished (Gigerenzer, 2015 ). As for Kahneman and Tversky’s work, Gigerenzer has a different approach and challenges their ideas with various arguments, facts, and numbers.

Gigerenzer explores how people make sense of their reality with constrained time and data. Since the world around us is highly uncertain, complex, and volatile, he suggests that probability theory cannot stand as the ultimate concept and is incapable of interpreting everything, particularly when probabilities are unknown. Instead, people tend to use the effortless approach of heuristics. Gigerenzer introduced the concept of the adaptive toolbox, which is a collection of mental shortcuts that a person or group of people can choose from to solve a current problem (Gigerenzer, 2000 ). A heuristic is considered ecologically rational if adjusted to the surrounding ecosystem (Gigerenzer, 2015 ).

A daring argument of Gigerenzer, which very much opposes the heuristics and biases approach of Kahneman and Tversky, is that heuristics cannot be considered irrational or inferior to a solution by optimization or probability calculation. He explicitly argues that heuristics are not gambling shortcuts that are faster but riskier (Gigerenzer, 2008 ), but points to several situations where less is more, meaning that results from frugal heuristics, which neglect some data, were nevertheless more accurate than results achieved by seemingly more elaborate multiple regression or Bayesian methods that try to incorporate all relevant data. While researchers consider this counterintuitive since a basic rule in research seems to be that more data is always better than less, Gigerenzer points out that the less-is-more effect (abbreviated as LIME) could be confirmed by computer simulations. Without denying that in some situations, the effect of using heuristics may be biased (Gigerenzer and Todd, 1999 ), Gigerenzer emphasizes that fast-and-frugal heuristics are basic, task-oriented choice systems that are a part of the decision-maker’s toolbox, the available collection of cognitive techniques for decision-making (Goldstein and Gigerenzer, 2002 ).

Heuristics are considered economical because they are easy to execute, seek limited data, and do not include many calculations. Contrary to most traditional decision-making models followed in the social and behavioural sciences, models of fast-and-frugal heuristics portray not just the result of the process but also the process itself. They comprise three simple building blocks: the search rule that specifies how information is searched for, the stopping rule that specifies when the information search will be stopped, and finally, the decision rule that specifies how the processed information is integrated into a decision (Goldstein and Gigerenzer, 2002 ).

Rather than characterizing heuristics as rules of thumb or mental shortcuts that can cause biases and must therefore be regarded as irrational, Gigerenzer and his co-workers emphasize that fast-and-frugal heuristics are often ecologically rational, even if the conjunction of them may not even be logically consistent (Gigerenzer and Todd, 1999 ).

According to Goldstein and Gigerenzer ( 2002 ), a decision maker’s pool of mental techniques may contain logic and probability theory, but it also embraces a set of simple heuristics. It is compared to a toolbox because just as a wood saw is perfect for cutting wood but useless for cutting glass or hammering a nail into a wall, the ingredients of the adaptive toolbox are intended to tackle specific scenarios.

For instance, there are specific heuristics for choice tasks, estimation tasks, and categorization tasks. In what follows, we will discuss two well-known examples of fast-and-frugal heuristics: the recognition heuristic (RH), which utilizes the absence of data, and the take-the-best heuristic (TTB), which purposely disregards the data.

Both examples of heuristics can be connected to decision assignments and to circumstances in which a decision-maker needs to decide which of two options has a higher reward on a quantitative scale.

Ideal scenarios would be deducing which one of two stock shares will have a better income in the next month, which of two cars is more convenient for a family, or who is a better candidate for a particular job (Goldstein and Gigerenzer, 2002 ).

The recognition heuristic

The recognition heuristic has been examined broadly with the famous experiment to determine which of the two cities has a higher population. This experiment was conducted in 2002, and the participants were undergraduate students: one group in the USA and one in Germany. The question was as follows: which has more occupants—San Diego or San Antonio? Given the cultural difference between the student groups and the level of information regarding American cities, it could be expected that American students would have a higher accuracy rate than their German peers. However, most German students did not even know that San Antonio is an American city (Goldstein and Gigerenzer, 2002 ). Surprisingly, the examiners, Goldstein and Gigerenzer, found the opposite of what was expected. 100% of the German students got the correct answer, while the American students achieved an accuracy rate of around 66%. Remarkably, the German students who had never known about San Antonio had more correct answers. Their lack of knowledge empowered them to utilize the recognition heuristic, which states that if one of two objects is recognized and the other is not, then infer that the recognized object has the higher value concerning the relevant criterion. The American students could not use the recognition heuristic because they were familiar with both cities. Ironically, they knew too much.

The recognition heuristic is an incredible asset. In many cases, it is used for swift decisions since recognition is usually systematic and not arbitrary. Useful applications may be cities’ populations, players’ performance in major leagues, or writers’ level of productivity. However, this heuristic will be less efficient in more difficult scenarios than a city’s population, such as the age of the city’s mayor or its sea-level altitude (Gigerenzer and Todd, 1999 ).

Take-the-best heuristic

When the recognition heuristic is not efficient because the decision-maker has enough information about both options, another important heuristic can be used that relies on hints or cues to arrive at a decision. The take-the-best (TTB) heuristic is a heuristic that relies only on specific cues or signals and does not require any complex calculations. In practice, it often boils down to a one-reason decision rule, a type of heuristic where judgements are based on a single good reason only, ignoring other cues (Gigerenzer and Gaissmaier, 2011 ). According to the TTB heuristic, a decision-maker evaluates the case by selecting the attributes which are important to him and sorts these cues by importance to create a hierarchy for the decision to be taken. Then alternatives are compared according to the first, i.e., the most important, cue; if an alternative is the best according to the first cue, the decision is taken. Otherwise, the decision-maker moves to the next layer and checks that level of cues. In other words, the decision is based on the most important attribute that allows one to discriminate between the alternatives (Gigerenzer and Goldstein, 1996 ). Although this lexicographic preference ordering is well known from traditional economic theory, it appears there mainly to provide a counterexample to the existence of a real-valued utility function (Debreu, 1959 ). Surprisingly, however, it seems to be used in many critical situations. For example, in many airports, the customs officials may decide if a traveller is chosen for a further check by looking only at the most important attributes, such as the city of departure, nationality, or luggage weight (Pachur and Marinello, 2013 ). Moreover, in 2012, a study explored voters’ views of how US presidential competitors would deal with the single issue that voters viewed as most significant, for example, the state of the economy or foreign policy. A model dependent on this attribute picked the winner in most cases (Graefe and Armstrong, 2012 ).

However, the TTB heuristic has a stopping rule applied when the search reaches a discriminating cue. So, if the most important signal discriminates, there is no need to continue searching for other cues, and only one signal is considered. Otherwise, the next most important signal will be considered. If no discriminating signal is found, the heuristic will need to make a random guess (Gigerenzer and Gaissmaier, 2011 ).

Empirical evidence on fast-and-frugal heuristics

More studies have been conducted on fast-and-frugal heuristics using analytical methods and simulations to investigate when and why heuristics yield accurate results on the one hand, and on the other hand, using experiments and observational methods to find out whether and when people use fast-and-frugal heuristics (Luan et al., 2019 ). Structured examinations and benchmarking with standard models, for example, regression or Bayesian models, have shown that the accuracy of fast-and-frugal heuristics relies upon the structure of the information environment (e.g., the distribution of signal validities, the interrelation between signals, etc.). In numerous situations, fast-and-frugal heuristics can perform well, particularly in generalized contexts, when making predictions for new cases that have not been previously experienced. Empirical examinations show that people utilize fast-and-frugal heuristics under a time constraint when data is hard to obtain or must be retrieved from memory. Remarkably, some studies have inspected how individuals adjust to various situations by learning. Rieskamp and Otto ( 2006 ) found that individuals seemingly learn to choose the heuristic that has the best performance in a specific domain. Nevertheless, Reimer and Katsikopoulos ( 2004 ) found that individuals apply fast-and-frugal heuristics when making inferences in groups.

While interest in heuristics has been increasing, some of the literature has been mostly critical. In particular, the heuristics and biases programme introduced by Kahneman and Tversky has been the target of more than one critique (Reisberg, 2013 ).

The arguments are mainly in two directions. The first is that the main focus is on the coherence standards such as rationality and that the detection of biases ignores the context-environmental factors where the judgements occur (B.R. Newell, 2013 ). The second is that notions such as availability or representativeness are vague and undefined, and state little regarding the procedures’ hidden judgements (Gigerenzer, 1996 ). For example, it has been argued that the replies in the acclaimed Linda-the-bank-teller experiment could be considered sensible instead of biased if one uses conversational or colloquial standards instead of formal probability theory (Hilton, 1995 ).

The argument of having a vague explanation for certain phenomena can be illustrated when considering the following two scenarios. People tend to believe that an opposite outcome will be achieved after having a stream of the same outcome (e.g., people tend to believe that ‘heads’ should be the next outcome in a coin-flipping game with many consecutive ‘tails’). This is called the gambler fallacy (Barron and Leider, 2010 ). By contrast, the hot-hand fallacy (Gilovich et al., 1985 ) argues that people tend to believe that a stream of the same outcome will continue when there is a lucky day (e.g., a player is taking a shot in a sport such as a basketball after a series of successful attempts). Ayton and Fisher ( 2004 ) argued that, although these two practices are quite opposite, they have both been classified under the heuristic of representativeness. In the two cases, a flawed idea of random events drives observers to anticipate that a certain stream of results is representative of the whole procedure. In the first scenario of coin flipping, people tend to believe that a long stream of tails should not occur; hence the head is predicted. While in the case of the sports player, the stream of the same outcome is expected to continue (Gilovich et al., 1985 ). Therefore, representativeness cannot be diagnosed without considering in advance the expected results. Also, the heuristic does not clarify why people have the urge to believe that a stream of random events should have a representative, while in real life, it does not (Ayton and Fischer, 2004 ).

Nevertheless, the most common critique of Kahneman and Tversky is the idea that ‘we cannot be that dumb’. It states that the heuristics and biases programme is overly pessimistic when assessing the average human decision-making. Also, humans collectively have accumulated many achievements and discoveries throughout human history that would not have been possible if their ability to adequate decision-making had been so limited (Gilovich and Griffin, 2002 ).

Similarly, the probabilistic mental models (PMM) theory of human inference inspired by Simon and pioneered by Gigerenzer has also been exposed to criticism (B.R. Newell et al., 2003 ). Indeed, the enticing character of heuristics that they are both easy to apply and efficient has made them famous within different domains. However, it has also made them vulnerable to replications or variations of the experiments that challenge the original results. For example, Daniel Oppenheimer ( 2003 ) argues that the recognition heuristic (RH) could not yield satisfactory results after replicating the experiment of city populations. He claims that the participants’ judgements failed to obey the RH not just when there were cues other and stronger than mere recognition but also in circumstances where recognition would have been the best cue available. In any case, one could claim that there are numerous methods in the adaptive toolbox and that under certain conditions, people may prefer to use heuristics other than the RH. However, this statement is also questionable since many heuristics that are thought to exist in the adaptive toolbox acknowledge the RH as an initial step (Gigerenzer and Todd, 1999 ). Hence, if individuals are not using the RH, they cannot use many of the other heuristics in the adaptive toolbox (Oppenheimer, 2003 ). Likewise, Newell et al. ( 2003 ) question whether the fast-and-frugal heuristics accurately explain actual human behaviour. In two experiments, they challenged the take-the-best (TTB) heuristic, as it is considered a building block in the PMM framework. The outcomes of these experiments, together with others, such as those of Jones et al. ( 2000 ) and Bröder ( 2000 ), show that the TTB heuristic is not a reliable approach even within circumstances favouring its use. In a somewhat heated debate published in the Psychological Review 1996, Gigerenzer’s criticism of Kahneman and Tversky that many of the so-called biases ‘disappear’ if frequencies rather than probabilities are assumed, was countered by Kahneman and Tversky ( 1996 ) by means of a detailed re-examination of the conjunction fallacy (or Linda Problem). Gigerenzer ( 1996 ) remained unconvinced, and was in turn, blamed by Kahneman and Tversky ( 1996 , p. 591) for just reiterating ‘his objections … without answering our main arguments’.

Our historical review has revealed a number of issues that have received little attention in the literature.

Deliberate vs. automatic heuristics

We have differentiated between deliberate and automatic heuristics, which often seem to be confused in the literature. While it is a widely shared view today that the human brain often relies heavily on the fast and effortless ‘System 1’ in decision-making, but can also use the more demanding tools of ‘System 2’, and it has been acknowledged, e.g. by Kahneman ( 2011 , p. 98), that some heuristics belong to System 1 and others to System 2, the two systems are not as clearly distinct as it may seem. In fact, the very wide range of what one may call ‘heuristics’ shows that there is a whole spectrum of fallible decision-making procedures—ranging from the probably innate problem-solving strategy of the baby that cries whenever it is hungry or has some other problem, to the most elaborate and sophisticated procedures of, e.g., Polya, Bolzano, or contemporary chess-engines. One may be tempted to characterize instinctive procedures as subconscious and sophisticated ones as conscious, but a deliberate heuristic can very well become a subconsciously applied ‘habit of the mind’ or learnt routine with experience and repetition. Vice versa, automatic, subconscious heuristics can well be raised to consciousness and be applied deliberately. E.g., the ‘inductive inference’ from tasty strawberries to the assumption that all red berries are sweet and edible may be quite automatic and subconscious in little children, but the philosophical literature on induction shows that it can be elaborated into something quite conscious. However, while the notion of consciousness may be crucial for an adequate understanding of heuristics in human cognition, for the time being, it seems to remain a philosophical mystery (Harley, 2021 ; Searle, 1997 ), and once programmed, sophisticated heuristic algorithms can be executed by automata.

The deliberate heuristics that we reviewed also illustrate that some of them can hardly be called ‘simple’, ‘shortcuts’, or ‘rules of thumb’. E.g., the heuristics of Descartes, Bolzano, or Polya each consist of a structured set of suggestions, and, e.g., ‘to devise a plan’ for a mathematical proof is certainly not a shortcut. Llull ( 1308 , p. 329), to take another example, wrote of his ‘ars magna’ that ‘the best kind of intellect can learn it in two months: one month for theory and another month for practice’.

Heuristics vs. algorithms

Our review of heuristics also allowed us to clarify the distinction between heuristics and algorithms. As evidenced by our glimpse at computer science, there are procedures that are quite obviously both an algorithm and a heuristic. Within computer science, they are in fact quite common. Algorithms of the heuristic type may be required for certain problems even though an algorithm that finds the optimal solution exists ‘in principle’—as in the case of determining the optimal strategy in chess, where the brute-force-method to enumerate all possible plays of chess is just not practically feasible. In other cases, heuristic algorithms are used because an exhaustive search, while practically feasible, would be too costly or time-consuming. Clearly, for many problems, there are also problem-solving algorithms which always do produce the optimal solution in a reasonable time frame. Given our definition of a heuristic as a fallible method, algorithms of this kind are counterexamples to the complaint that the notion has become so wide that ‘any procedure can be called a heuristic’. However, as we have seen, there are also heuristic procedures that are non-algorithmic. These may be necessary either because the problem to be solved is not sufficiently well-defined to allow for an algorithm, or because an algorithm that would solve the problem at hand, is not known or does not exist. Kleining’s qualitative heuristics is an example of non-algorithmic heuristics necessitated by the ill-defined problems of research in the social sciences, while Polya’s heuristic for solving mathematical problems is an example of the latter: an algorithm that would allow one to decide if a given mathematical conjecture is a theorem or not does not exist (cf. Davis, 1965 ).

Pre-SEU vs. post-SEU heuristics

As we noted in the introduction, the emergence of the SEU theory can be regarded as a kind of watershed for the research on heuristics, as it came to be regarded as the standard definition of rational choice. Post-SEU, fallible methods of decision-making would have to face comparison with this standard. Gigerenzer’s almost belligerent criticism of SEU shows that even today it seems difficult to discuss the pros and cons of heuristics unless one relates them to the backdrop of SEU. However, his criticism of SEU is mostly en passant and seems to assume that the SEU model requires ‘known probabilities’ (e.g., Gigerenzer, 2021 ), ignoring the fact that it is, in general, subjective probabilities, as derived from the agent’s preferences among lotteries, that the model relies on (cf. e.g., Jeffrey, 1967 or Gilboa, 2011 ). In fact, when applied to an ill-defined decision problem in, e.g., management, the SEU theory may well be regarded as a heuristic—it asks you to consider the possible consequences of the relevant set of actions, your preferences among those consequences, and the likelihood of those consequences. To the extent that one may get all of these elements wrong, SEU is a fallible method of decision-making. To be sure, it is not a fast and effortless heuristic, but our historical review of pre-SEU heuristics has illustrated that heuristics may be quite elaborate and require considerable effort and attention.

It is quite true, of course, that the SEU heuristic will hardly be helpful in problem-solving that is not ‘just’ decision-making. If, e.g., the problem to be solved is to find a proof for a mathematical conjecture, the set of possible actions will in general be too vast to be practically contemplated, let alone evaluated according to preferences and probabilities.

Positive vs. negative heuristics

To the extent that the study of heuristics aims at understanding how decisions are actually made, it is not only positive heuristics that need to be considered. It will also be required to investigate the conditions that may prevent the agent from adopting certain courses of action. As we saw, Lakatos used the notion of negative heuristics quite explicitly to characterize research programmes, but we also briefly review Duncker’s notion of ‘functional fixedness’ as an example of a hindrance to adequate problem-solving. A systematic study of such negative heuristics seems to be missing in the literature and we believe that it may be a helpful complement to the study of positive heuristics which has dominated the literature that we reviewed.

To the extent that heuristics are studied with the normative aim of identifying effective heuristics, it may also be useful to consider approaches that should not be taken. ‘Do not try to optimize!’ might be a negative heuristic favoured by the fast-and-frugal school of thought.

Heuristics as the product of evolution

Clearly, heuristics have always existed throughout the development of human knowledge due to the ‘old mind’s’ evolutionary roots and the frequent necessity to apply fast and sufficiently reliable behaviour patterns. However, unlike the behaviour patterns in the other animals, the methods used by humans in problem-solving are sufficiently diverse that the dual-process theory was suggested to provide some structure to the rich ‘toolbox’ humans can and do apply. As all our human DNA is the product of evolution, it is not only the intuitive inclinations to react to certain stimuli in a particular way that must be seen as the product of evolution, but also our ability to abstain from following our gut feelings when there is reason to do so, to reflect and analyse the situation before we embark on a particular course of action. Quite frequently, we experience a tension between our intuitive inclinations and our analytic mind’s judgement, but both of them are somehow the product of evolution, our biography, and the environment. Thus, to point out that gut feelings are an evolved capacity of the brain does in no way provide an argument that would support their superiority over the reflective mind.

Moreover, compared to the speed of problem change in our human lifetimes, biological evolution is very slow. The evolved capacities of the human brain may have been well-adapted to the survival needs of our ancestors some 300,000 years ago, but there is little reason to believe that they are uniformly well-adapted to human problem-solving in the 21st century.

Resource-bounded and ecological rationality

Throughout our review, the reader will have noticed that many heuristics have been suggested for specific problem areas. The methods of the ancient Greeks were mainly centred around solving geometrical problems. Llull was primarily concerned with theological questions, Descartes and Leibniz pursued ‘mechanical’ solutions to philosophical issues, Polya suggested heuristics for Mathematics, Müller for engineering, and Kleining for social science research. This already suggests that heuristics suitable for one type of problem need not be suitable for a different type. Likewise, the automatic heuristics that both the Kahneman-Tversky and the Gigerenzer schools focused on, are triggered by particular tasks. Simon’s observation that the success of a given heuristic will depend on the environment in which it is employed, is undoubtedly an important one that has motivated Gigerenzer’s notion of ecological rationality and is strikingly absent from the SEU model. If ‘environment’ is taken in a broad sense that includes the available resources, the cost of time and effort, the notion seems to cover what has been called resource-rational behaviour (e.g., Bhui et al., 2021 ).

Avenues of further research

A comprehensive study describing the current status of the research on heuristics and their relation to SEU seems to be missing and is beyond the scope of our brief historical review. Insights into their interrelationship can be expected from recent attempts at formal modelling of human cognition that take the issues of limited computational resources and context-dependence of decision-making seriously. E.g., Lieder and Griffiths ( 2020 ) do this from a Bayesian perspective, while Busemeyer et al. ( 2011 ) and Pothos and Busemeyer ( 2022 ) use a generalization of standard Kolmogorov probability theory that is also the basis of quantum mechanics and quantum computation. While it may seem at first glance that such modelling assumes even more computational power than the standard SEU model of decision-making, the computational power is not assumed on the part of the human decision-maker. Rather, the claim is that the decision-maker behaves as if s/he would solve an optimization problem under additional constraints, e.g., on computational resources. The ‘as if’ methodology that is employed here is well-known to economists (Friedman, 1953 ; Mäki, 1998 ) and also to mathematical biologists who have used Bayesian models to explain animal behaviour (McNamara et al., 2006 ; Oaten, 1977 ; Pérez-Escudero and de Polavieja, 2011 ). Evolutionary arguments might be invoked to support this methodology if a survival disadvantage can be shown to result from behaviour patterns that are not Bayesian optimal, but we are not aware of research that would substantiate such arguments. However, attempting to do so by embedding formal models of cognition in models of evolutionary game theory may be a promising avenue for further research.

NP stands for ‘nondeterministic polynomial-time’, which indicates that the optimal solution can be found by a nondeterministic Turing-machine in a running time that is bounded by a polynomial function of the input size. In fact, the TSP is ‘NP-hard’ which means that it is ‘at least as hard as the hardest problems in the category of NP problems’.

Agre P, Horswill I (1997) Lifeworld analysis. J Artif Intell Res 6:111–145

Article   Google Scholar  

Ayton P, Fischer I (2004) The hot hand fallacy and the gambler’s fallacy. Two faces of subjective randomness. Memory Cogn 32:8

Banse G, Friedrich K (2000) Konstruieren zwischen Kunst und Wissenschaft. Edition Sigma, Idee‐Entwurf‐Gestaltung, Berlin

Google Scholar  

Baron J (2000) Thinking and deciding. Cambridge University Press

Barron G, Leider S (2010) The role of experience in the Gambler’s Fallacy. J Behav Decision Mak 23:1

Barros G (2010) Herbert A Simon and the concept of rationality: boundaries and procedures. Brazilian. J Political Econ 30:3

Baumeister RF, Vohs KD (2007) Encyclopedia of social psychology, vol 1. SAGE

Bazerman MH, Moore DA (1994) Judgment in managerial decision making. Wiley, New York

Bentley JL (1982) Writing efficient programs Prentice-Hall software series. Prentice-Hall

Bhui R, Lai L, Gershman S (2021) Resource-rational decision making. Curr Opin Behav Sci 41:15–21. https://doi.org/10.1016/j.cobeha.2021.02.015

Bolzano B (1837) Wissenschaftslehre. Seidelsche Buchhandlung, Sulzbach

Bossaerts P, Murawski C (2017) Computational complexity and human decision-making. Trends Cogn Sci 21(12):917–929

Article   PubMed   Google Scholar  

Boyer CB (1991) The Arabic Hegemony. A History of Mathematics. Wiley, New York

Bröder A (2000) Assessing the empirical validity of the “Take-the-best” heuristic as a model of human probabilistic inference. J Exp Psychol Learn Mem Cogn 26:5

Burke E, Kendall G, Newall J, Hart E, Ross P, Schulenburg S (2003) Hyper-heuristics: an emerging direction in modern search technology. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Busemeyer JR, Pothos EM, Franco R, Trueblood JS (2011) A quantum theoretical explanation for probability judgment errors. Psychol Rev 118(2):193

Buss DM, Kenrick DT (1998) Evolutionary social psychology. In: D T Gilbert, S T Fiske, G Lindzey (eds.), The handbook of social psychology. McGraw-Hill, p. 982–1026

Byron M (1998) Satisficing and optimality. Ethics 109:1

Davis M (ed) (1965) The undecidable. Basic papers on undecidable propositions, unsolvable problems and computable functions. Raven Press, New York

MATH   Google Scholar  

Debreu G (1959) Theory of value: an axiomatic analysis of economic equilibrium. Yale University Press

Descartes R (1908) Rules for the Direction of the Mind. In: Oeuvres de Descartes, vol 10. In: Adam C, Tannery P (eds). J Vrin, Paris

Descartes R (1998) Discourse on the method for conducting one’s reason well and for seeking the truth in the sciences (1637) (trans and ed: Cress D). Hackett, Indianapolis

Dunbar RIM (1998) Grooming, gossip, and the evolution of language. Harvard University Press

Duncker K (1935) Zur Psychologie des produktiven Denkens. Springer

Englich B, Mussweiler T, Strack F (2006) Playing dice with criminal sentences: the influence of irrelevant anchors on experts’ judicial decision making. Personal Soc Psychol Bull 32:2

Evans JSB (2010) Thinking twice: two minds in one brain. Oxford University Press

Farr RM (1996) The roots of modern social psychology, 1872–1954. Blackwell Publishing

Farris PW, Bendle N, Pfeifer P, Reibstein D (2010) Marketing metrics: the definitive guide to measuring marketing performance. Pearson Education

Fidora A, Sierra C (2011) Ramon Llull, from the Ars Magna to artificial intelligence. Artificial Intelligence Research Institute, Barcelona

Frantz R (2003) Herbert Simon Artificial intelligence as a framework for understanding intuition. J Econ Psychol 24:2. https://doi.org/10.1016/S0167-4870(02)00207-6

Friedman M (1953) The methodology of positive economics. In: Friedman M (ed) Essays in positive economics. University of Chicago Press

Ghiselin MT (1973) Darwin and evolutionary psychology. Science (New York, NY) 179:4077

Gibbons A (2007) Paleoanthropology. Food for thought. Science (New York, NY) 316:5831

Gigerenzer G (1996) On narrow norms and vague heuristics: a reply to Kahneman and Tversky. 1939–1471

Gigerenzer G (2000) Adaptive thinking: rationality in the real world. Oxford University Press, USA

Gigerenzer G (2008) Why heuristics work. Perspect Psychol Sci 3:1

Gigerenzer G (2015) Simply rational: decision making in the real world. Evol Cogn

Gigerenzer G (2021) Embodied heuristics. Front Psychol https://doi.org/10.3389/fpsyg.2021.711289

Gigerenzer G, Gaissmaier W (2011) Heuristic decision making. Annual Review of Psychology 62, p 451–482

Gigerenzer G, Goldstein DG (1996) Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev 103:4

Gigerenzer G, Selten R (eds) (2001) Bounded rationality: the adaptive toolbox. MIT Press

Gigerenzer G, Todd PM (1999) Simple heuristics that make us smart. Oxford University Press, USA

Gilboa I (2011) Making better decisions. Decision theory in practice. Wiley-Blackwell

Gilovich T, Griffin D (2002) Introduction—heuristics and biases: then and now in heuristics and biases: the psychology of intuitive judgment (8). Cambridge University Press

Gilovich T, Vallone R, Tversky A (1985) The hot hand in basketball: on the misperception of random sequences. Cogn Psychol 17:3

Glaveanu VP (2019) The creativity reader. Oxford University Press

Glover F, Kochenberger GA (eds) (2003) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Goldstein DG, Gigerenzer G (2002) Models of ecological rationality: the recognition heuristic. Psychol Rev 109:1

Graefe A, Armstrong JS (2012) Predicting elections from the most important issue: a test of the take-the-best heuristic. J Behav Decision Mak 25:1

Groner M, Groner R, Bischof WF (1983) Approaches to heuristics: a historical review. In: Groner R, Groner M, Bischof WF (eds) Methods of heuristics. Erlbaum

Groner R, Groner M (1991) Heuristische versus algorithmische Orientierung als Dimension des individuellen kognitiven Stils. In: Grawe K, Semmer N, Hänni R (Hrsg) Üher die richtige Art, Psychologie zu betreiben. Hogrefe, Göttingen

Gugerty L (2006) Newell and Simon’s logic theorist: historical background and impact on cognitive modelling. In: Proceedings of the human factors and ergonomics society annual meeting. Symposium conducted at the meeting of SAGE Publications. Sage, Los Angeles, CA

Harel D (2000) Computers Ltd: what they really can’t do. Oxford University Press

Harley TA (2021) The science of consciousness: waking, sleeping and dreaming. Cambridge University Press

Harris B (1979) Whatever happened to little Albert? Am Psychol 34:2

Heath TL (1926) The thirteen books of Euclid’s elements. Introduction to vol I, 2nd edn. Cambridge University Press

Hertwig R, Pachur T (2015) Heuristics, history of. In: International encyclopedia of the social behavioural sciences. Elsevier, pp. 829–835

Hilton DJ (1995) The social context of reasoning: conversational inference and rational judgment. Psychol Bull 118:2

Hopcroft JE, Motwani R, Ullman JD (2007) Introduction to Automata Theory, languages, and computation. Addison Wesley, Boston/San Francisco/New York

Jeffrey R (1967) The logic of decision, 2nd edn. McGraw-Hill

Jones S, Juslin P, Olsson H, Winman A (2000) Algorithm, heuristic or exemplar: Process and representation in multiple-cue judgment. In: Proceedings of the 22nd annual conference of the Cognitive Science Society. Symposium conducted at the meeting of Erlbaum, Hillsdale, NJ

Kahneman D (2011) Thinking, fast and slow. Farar, Straus and Giroux

Kahneman D, Klein G (2009) Conditions for intuitive expertise: a failure to disagree. Am Psychol 64:6

Kahneman D, Tversky A (1996) On the reality of cognitive illusions. In: Psychological Review, 103(3), p 582–591

Khaldun I (1967) The Muqaddimah. An introduction to history (trans: Arabic by Rosenthal F). Abridged and edited by Dawood NJ. Princeton University Press

Klein G (2001) The fiction of optimization. In: Gigerenzer G, Selten R (eds) Bounded Rationality: The Adaptive Toolbox. MIT Press Editors

Kleining G (1982) Umriss zu einer Methodologie qualitativer Sozialforschung. Kölner Z Soziol Sozialpsychol 34:2

Kleining G (1995) Von der Hermeneutik zur qualitativen Heuristik. Beltz

Lakatos I (1970) Falsification and the methodology of scientific research programmes. In: Lakatos I, Musgrave A (eds) Criticism and the growth of knowledge. Cambridge University Press

Leibniz GW (1880) Die Philosophischen Schriften von GW Leibniz IV, hrsg von CI Gerhardt

Lerner RM (1978) Nature Nurture and Dynamic Interactionism. Human Development 21(1):1–20. https://doi.org/10.1159/000271572

Lieder F, Griffiths TL (2020) Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences. Vol 43, e1. Cambridge University Press

Link D (2010) Scrambling TRUTH: rotating letters as a material form of thought. Variantology 4, p. 215–266

Llull R (1308) Ars Generalis Ultima (trans: Dambergs Y), Yanis Dambergs, https://lullianarts.narpan.net/

Luan S, Reb J, Gigerenzer G (2019) Ecological rationality: fast-and-frugal heuristics for managerial decision-making under uncertainty. Acad Manag J 62:6

Mäki U (1998) As if. In: Davis J, Hands DW, Mäki U (ed) The handbook of economic methodology. Edward Elgar Publishing

Martí R, Pardalos P, Resende M (eds) (2018) Handbook of heuristics. Springer, Cham

McDougall W (2015) An introduction to social psychology. Psychology Press

McNamara JM, Green RF, Olsson O (2006) Bayes’ theorem and its applications in animal behaviour. Oikos 112(2):243–251. http://www.jstor.org/stable/3548663

Newborn M (1997) Kasparov versus Deep Blue: computer chess comes of age. Springer

Newell A, Shaw JC, Simon HA (1959) Report on a general problem-solving program. In: R. Oldenbourg (ed) IFIP congress. UNESCO, Paris

Newell A, Simon HA (1972) Human problem solving. Prentice-Hall, Englewood Cliffs, NJ

Newell BR (2013) Judgment under uncertainty. In: Reisberg D (ed) The Oxford handbook of cognitive psychology. Oxford University Press

Newell BR, Weston NJ, Shanks DR (2003) Empirical tests of a fast-and-frugal heuristic: not everyone “takes the best”. Organ Behav Hum Decision Processes 91:1

Oaten A (1977) Optimal foraging in patches: a case for stochasticity. Theor Popul Biol 12(3):263–285

Article   MathSciNet   CAS   PubMed   MATH   Google Scholar  

Oppenheimer DM (2003) Not so fast! (and not so frugal!): rethinking the recognition heuristic. Cognition 90:1

Pachur T, Marinello G (2013) Expert intuitions: how to model the decision strategies of airport customs officers? Acta Psychol 144:1

Pearl J (1984) Heuristics: intelligent search strategies for computer problem solving. Addison-Wesley Longman Publishing Co, Inc

Pérez-Escudero A, de Polavieja G (2011) Collective animal behaviour from Bayesian estimation and probability matching. Nature Precedings

Pinheiro CAR, McNeill F (2014) Heuristics in analytics: a practical perspective of what influences our analytical world. Wiley Online Library

Polya G (1945) How to solve it. Princeton University Press

Polya G (1954) Induction and analogy in mathematics. Princeton University Press

Pombo O (2002) Leibniz and the encyclopaedic project. In: Actas do Congresso Internacional Ciência, Tecnologia Y Bien Comun: La atualidad de Leibniz

Pothos EM, Busemeyer JR (2022) Quantum cognition. Annu Rev Psychol 73:749–778

Priest G (2008) An introduction to non-classical logic: from if to is. Cambridge University Press

Book   MATH   Google Scholar  

Ramsey FP (1926) Truth and probability. In: Braithwaite RB (ed) The foundations of mathematics and other logical essays. McMaster University Archive for the History of Economic Thought. https://EconPapers.repec.org/RePEc:hay:hetcha:ramsey1926

Reimer T, Katsikopoulos K (2004) The use of recognition in group decision-making. Cogn Sci 28:6

Reisberg D (ed) (2013) The Oxford handbook of cognitive psychology. Oxford University Press

Rieskamp J, Otto PE (2006) SSL: a theory of how people learn to select strategies. J Exp Psychol Gen 135:2

Ritchey T (2022) Ramon Llull and the combinatorial art. https://www.swemorph.com/amg/pdf/ars-morph-1-draft-ch-4.pdf

Ritter J, Gründer K, Gabriel G, Schepers H (2017) Historisches Wörterbuch der Philosophie online. Schwabe Verlag

Russell SJ, Norvig P, Davis E (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice-Hall series in artificial intelligence. Prentice-Hall

Savage LJ (ed) (1954) The foundations of statistics. Courier Corporation

Schacter D, Gilbert D, Wegner D (2011) Psychology, 2nd edn. Worth

Schaeffer J, Burch N, Bjornsson Y, Kishimoto A, Muller M, Lake R, Lu P, Sutphen S (2007) Checkers is solved. Science 317(5844):1518–1522

Article   ADS   MathSciNet   CAS   PubMed   MATH   Google Scholar  

Schreurs BG (1989) Classical conditioning of model systems: a behavioural review. Psychobiology 17:2

Scopus (2022) Search “heuristics”. https://www.scopus.com/standard/marketing.uri (TITLE-ABS-KEY(heuristic) AND (LIMIT-TO (SUBJAREA,"DECI") OR LIMIT-TO (SUBJAREA,"SOCI") OR LIMIT-TO (SUBJAREA,"BUSI"))) Accessed on 16 Apr 2022

Searle JR (1997) The mystery of consciousness. Granta Books

Semaan G, Coelho J, Silva E, Fadel A, Ochi L, Maculan N (2020) A brief history of heuristics: from Bounded Rationality to Intractability. IEEE Latin Am Trans 18(11):1975–1986. https://latamt.ieeer9.org/index.php/transactions/article/view/3970/682

Sen S (2020) The environment in evolution: Darwinism and Lamarckism revisited. Harvest Volume 1(2):84–88. https://doi.org/10.2139/ssrn.3537393

Shah AK, Oppenheimer DM (2008) Heuristics made easy: an effort-reduction framework. Psychol Bull 134:2

Siitonen A (2014) Bolzano on finding out intentions behind actions. In: From the ALWS archives: a selection of papers from the International Wittgenstein Symposia in Kirchberg am Wechsel

Simon HA (1955) A behavioural model of rational choice. Q J Econ 69:1

Simon HA, Newell A (1958) Heuristic problem solving: the next advance in operations research. Oper Res 6(1):1–10. http://www.jstor.org/stable/167397

Article   MATH   Google Scholar  

Smith R (2020) Aristotle’s logic. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, 2020th edn. Metaphysics Research Lab, Stanford University

Smulders TV (2009) Darwin 200: special feature on brain evolution. Biology Letters 5(1), p. 105–107

Sörensen K, Sevaux M, Glover F (2018) A history of metaheuristics. In: Martí R, Pardalos P, Resende M (eds) Handbook of heuristics. Springer, Cham

Stephenson N (2003) Theoretical psychology: critical contributions. Captus Press

Strack F, Mussweiler T (1997) Explaining the enigmatic anchoring effect: mechanisms of selective accessibility. J Person Soc Psychol 73:3

Sullivan D (2002) How search engines work. SEARCH ENGINE WATCH, at http://www.searchenginewatch.com/webmasters/work.Html (Last Updated June 26, 2001) (on File with the New York University Journal of Legislation and Public Policy). http://www.searchenginewatch.com

Suppes P (1983) Heuristics and the axiomatic method. In: Groner R et al (ed) Methods of Heuristics. Routledge

Turing A (1937) On computable numbers, with an application to the entscheidungsproblem. Proc Lond Math Soc s2-42(1):230–265

Article   MathSciNet   MATH   Google Scholar  

Tversky A, Kahneman D (1973) Availability: a heuristic for judging frequency and probability. Cogn Psychol 5:2

Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science (New York, NY) 185::4157

Vardi MY (2012) Artificial intelligence: past and future. Commun ACM 55:1

Vikhar PA (2016) Evolutionary algorithms: a critical review and its future prospects. Paper presented at the international conference on global trends in signal processing, information computing and communication (ICGTSPICC). IEEE, pp. 261–265

Volz V, Rudolph G, Naujoks B (2016) Demonstrating the feasibility of automatic game balancing. Paper presented at the proceedings of the Genetic and Evolutionary Computation Conference, pp. 269–276

von Neumann J, Morgenstern O (1944) Theory of games and economic behaviour. Princeton University Press, Princeton, p. 1947

Zermelo E (1913) Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. In: Proceedings of the fifth international congress of mathematicians. Symposium conducted at the meeting of Cambridge University Press, Cambridge. Cambridge University Press, Cambridge

Zilio D (2013) Filling the gaps: skinner on the role of neuroscience in the explanation of behavior. Behavior and Philosophy, 41, p. 33–59

Download references

Acknowledgements

We would like to extend our sincere thanks to the reviewers for their valuable time and effort in reviewing our work. Their insightful comments and suggestions have greatly improved the quality of our manuscript.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations.

HHL Leipzig Graduate School of Management, Leipzig, Germany

Mohamad Hjeij & Arnis Vilks

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Mohamad Hjeij .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Hjeij, M., Vilks, A. A brief history of heuristics: how did research on heuristics evolve?. Humanit Soc Sci Commun 10 , 64 (2023). https://doi.org/10.1057/s41599-023-01542-z

Download citation

Received : 25 July 2022

Accepted : 30 January 2023

Published : 17 February 2023

DOI : https://doi.org/10.1057/s41599-023-01542-z

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

This article is cited by

Socialisation approach to ai value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence.

  • Joel Janhonen

AI and Ethics (2023)

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

to state that a case study is heuristic means that

Heuristics and Evidences Decision (HeED) Making: a Case Study in a Systemic Model for Transforming Decision Making from Heuristics-Based to Evidenced-Based

  • Published: 01 September 2020
  • Volume 12 , pages 1668–1693, ( 2021 )

Cite this article

  • Tariq Mahadeen 1 ,
  • Kostas Galanakis 1 ,
  • Elpida Samara 2 &
  • Pavlos Kilintzis 3  

645 Accesses

2 Citations

Explore all metrics

Studies refer to Heuristics and Evidences Decision Making approaches in a comparative manner; however, it is identified that these two approaches are inseparable and are applied in parallel. The objective of this paper is to provide a qualitative analysis of a systems thinking framework that defines a transition path from either a heuristic dominated or evidence-based dominated decision-making approach to a balanced one. The aims are to demonstrate the stages of change and prepare managers and executives for the resistance that will be evident during the transition. We do not claim that this is the only path of change; however, it provides a structured model that can be repeated under similar context. We use abductive reasoning in order to make logical inferences and construct the framework’s theory based on a case study company, and then system dynamics that help us proceed to the modeling approach of this framework. The holistic modeling approach reveals the need to base decision making in both evidence and heuristics. Furthermore, it demonstrates actions to manage resistance and to make this system a self-regulated and continuous decision-making tool.

This is a preview of subscription content, log in via an institution to check access.

Access this article

Price includes VAT (Russian Federation)

Instant access to the full article PDF.

Rent this article via DeepDyve

Institutional subscriptions

to state that a case study is heuristic means that

Similar content being viewed by others

to state that a case study is heuristic means that

Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations

Gordon W. Cheung, Helena D. Cooper-Thomas, … Linda C. Wang

to state that a case study is heuristic means that

Reactions towards organizational change: a systematic literature review

Khai Wah Khaw, Alhamzah Alnoor, … Nadia A. Atshan

Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach

Zachary Munn, Micah D. J. Peters, … Edoardo Aromataris

For anonymity purposes we nicknamed the firm as MiddlePharma

In this case we define “campaign strategy” as balk production of a single product family that does not require tool changeovers and cleaning, based on annual demand.

Evidence is defined in our work as an organized body of information that is used in order to justify or support conclusions (Sackett et al. 2000 ). This information may have many forms, depending on the type of activities that are going to be used for and the scientific or managerial context they refer to. For example Sackett et al. ( 2000 ) consider information, as forms of evidence that may be used for evidence- based decision making.

Often the reasoning for change is external to the firm factors, for example, the activities and innovations of the competitor organizations, developments in technology and organizational procedures, diversity in customers’ requirements, changes in national and international legislation, diversity in local and global trading and economic circumstances, and changing cultural and social conditions (Radovic-Markovic 2007 ).

Frozen referred to the timeframe to 2 weeks closer the actual production. Changes are prohibited at this stage because it would be costly to reverse the plan to purchase the materials and produce different products. Firm , 2 weeks before frozen. In these weeks changes can occur, but only in exceptional situations. Full , which means that all the available production capacity has been allocated to orders. Changes in the full section can be made and production costs will be only slightly affected, but the effect on customer satisfaction is uncertain. The last section called Open , which means that there is available capacity for new orders (based on Gaither and Frazier 2002 ).

Amason, A. C., & Mooney, A. C. (2008). The Icarus paradox revisited: how strong performance sows the seeds of dysfunction in future strategic decision-making. Strategic Organization, 6 (4), 407–434.

Google Scholar  

Armstrong, M. (2001). A handbook of human resource management practice (8th ed.). London: Kogan Page Publishers.

Arribas, I., Comeig, I., Urbano, A., & Vila, J. (2014). Statistical formats to optimize evidence- based decision making: a behavioral approach. Journal of Business Research, 67 (5), 790–794.

Bala, A., & Koxhaj, A. (2017). Key performance indicators (KPIs) in the change management of public administration. European Scientific Journal, 13 (4), 278–283.

Balogun, J., & Johnson, G. (2005). From intended strategies to unintended outcomes: the impact of change recipient sensemaking. Organization Studies, 26 , 1573–1601.

Balsvik, R. & Haller, S. A. (2015). Ownership change and its implications for the match between the plant and its workers . Dublin: UCD School of Economics. Available at: http://irserver.ucd.ie/bitstream/handle/10197/6588/WP15_12.pdf?sequence=1 [accessed on 14/10/2018].

Bartkus, B. (1997). Employee ownership as catalyst of organizational change. Journal of Organizational Change Management, 10 (4), 331–344.

Bavol’ár, J., & Orosová, O. (2015). Decision-making styles and their associations with decision- making competencies and mental health. Judgment and Decision making, 10 (1), 115–122.

Boohene, R., & Williams, A. A. (2012). Resistance to organizational change: a case study of Oti Yeboah complex limited. International Business and Management, 4 (1), 135–145.

Bray, R. (2015). Developing a participative multi criteria decision making technique: a case study. International Journal of Management and Decision Making, 14 (1), 66–80.

Bridges, W. (2003). Managing transitions (2nd ed.). Cambridge: MA, Perseus Books.

Bröder, A., & Schiffer, S. (2006). Stimulus format and working memory in fast and frugal strategy selection. Journal of Behavioral Decision Making, 19 (4), 361–380.

Buono, A. F., & Kerber, K. W. (2010). Creating a sustainable approach to change: Building organizational change capacity. SAM Advanced Management Journal, 75 (2), 4–18.

Busenitz, L. W., & Barney, J. B. (1997). Differences between entrepreneurs and managers in large organizations: biases and heuristics in strategic decision-making. Journal of Business Venturing, 12 (1), 9–30.

Carmeli, A., Tishler, A., & Edmondson, A. C. (2012). CEO relational leadership and strategic decision quality in top management teams: the role of team trust and learning from failure. Strategic Organization, 10 (1), 31–54.

Caroly, S., Coutarel, F., Landry, A., & Mary-Cheray, I. (2010). Sustainable MSD prevention: Management for continuous improvement between prevention and production. Ergonomic intervention in two assembly line companies. Applied Ergonomics, 41 , 591–599.

Chakravarthy, B., & Cho, H.-J. (2004). Managing trust and learning: an exploratory study. International Journal of Management and Decision Making, 5 (4), 333–347.

Coyle, R. G. (1996). System dynamics modeling: a practical approach . New York: Chapman & Hall.

Danesh, D., Ryan, M. J., & Abbasi, A. (2018). Multi-criteria decision-making methods for project portfolio management: a literature review. International Journal of Management and Decision Making, 17 (1), 75–94.

Dawson, P. (1994). Organizational change: a processual approach . London: Paul Chapman Publishing Limited.

Dholakia, U., & Sonenshein, S. (2012). Explaining employee engagement with strategic change implementation: a meaning-making approach. Organization Science, 23 (1), 1–23.

Dooley, L., & O’Sullivan, D. (1999). Decision support system for the management of systems change. Technovation, 19 (8), 483–493.

Eastwood, J., Snook, B., & Luther, K. (2012). What people want from their professionals: attitudes toward decision-making strategies. Journal of Behavioral Decision Making, 25 (5), 458–468.

Elbanna, S., & Child, J. (2007). Influences on strategic decision effectiveness: development and test of an integrative model. Strategic Management Journal, 28 , 431–453.

Eriksson, C. B. (2004). The effects of change programs on employees’ emotions. Personnel Review, 33 (1), 110–126.

Fernando, R., Fernando, S., & Nepomuceno-Fernández, A. (2013). An epistemic and dynamic approach to abductive reasoning: abductive problem and abductive solution. Journal of Applied Logic, 11 (4), 505–522.

Fiegenbaum, A., & Thomas, H. (1995). Strategic groups as reference groups: theory, modeling and empirical examination of industry and competitive strategy. Strategic Management Journal, 16 (6), 461–476.

Forrester, J. W. (1961). Industrial dynamics . Cambridge: MIT Press.

Forrester, J. W., & Senge, P. M. (1980). Tests for building confidence in system dynamics models. In A. A. Legasto, J. W. Forrester, & J. M. Lyneis (Eds.), System dynamics, studies in the management sciences (pp. 209–228). North Holland Publishing Company.

Friedman, R. S., & Prusak, L. (2008). On heuristics, narrative and knowledge management. Technovation, 28 (12), 812–817.

Gaither, N., & Frazier, G. (2002). Production and operations management . Sao Paulo: Thomson Learning.

Galanakis, K. (2006). Innovation process. Make sense using systems thinking. Technovation, 26 (11), 1222–1232.

Garcia-Sabater, J. J., & Marin-Garcia, J. A. (2011). Can we still talk about continuous improvement? Rethinking enablers and inhibitors for successful implementation. International Journal Technology Management, 55 (1/2), 28–42.

Gibbons, B. (2013). Cyert and March (1963) at Fifty: a perspective from organizational economics. MIT and NBER April 7, 2013 Prepared for NBER Working Group in Organizational Economics SIEPR, April 12–13, 2013.

Gigerenzer, G. (2001). The adaptive toolbox. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: the adaptive toolbox (pp. 37–50). Cambridge: MIT Press.

Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple heuristics that make us smart . Oxford: Oxford UP.

Gold, J., Cureton, P., & Anderson, L. (2010). Theorising and practitioners in HRD: The role of abductive reasoning. Journal of European Industrial Training, 35 (3), 230–246.

Haque, B., Pawar, K. S., & Barson, R. J. (2003). The application of business process modeling to organisational analysis of concurrent engineering environments. Technovation, 23 (2), 147–162.

Heckmann, N., Steger, T., & Dowling, M. (2016). Organizational capacity for change, change experience, and change project performance. Journal of Business Research, 69 (2), 777–784.

Hofmann, D. A. (2015). Overcoming the obstacles to cross-functional decision making: laying the groundwork for collaborative problem solving. Organizational Dynamics, 44 (1), 17–25.

Huy, Q. N., Corley, K. G., & Kraatz, M. S. (2014). From support to mutiny: shifting legitimacy judgments and emotional reactions impacting the implementation of radical change. Academy of Management Journal, 57 (6), 1650–1680.

Ionescu, E. I., Merut, A., & Dragomiroiu, R. (2014). Role of managers in management of change. In 21st International Economic Conference (pp. 293–298). Sibiu: Elsevier.

Jones, L., Watson, B., Hobman, E., Bordia, P., Gallois, P., & Callan, V. J. (2008). Employee perceptions of organizational change: impact of hierarchical level. Leadership & Organizational Development Journal, 29 (4), 294–316.

Judge, W. Q., & Elenkov, D. (2005). Organizational capacity for change and environmental performance: an empirical assessment of Bulgarian firms. Journal of Business Research, 58 (7), 893–901.

Julnes, P. D., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: an empirical study of factors affecting adoption and implementation. Public Administration Review, 61 , 693–708.

Kilintzis, P., Samara, E., Carayannis, E., & Bakouros, Y. (2020). Business model innovation in Greece: Its effect on organizational sustainability. Journal of the Knowledge Economy, 11 (3), 949–967.

Kotter, J. P. (2007). Leading change: Why transformation efforts fail. Harvard Business Review. [Online] pp. 1–11. Available at: https://hbr.org/1995/05/leading-change-why-transformation-efforts-fail-2 [Accessed on 14/10/2018].

Kotter, J., & Schlesinger, L. (1979). Choosing strategies for change. Harvard Business Review, 57 , 106–114.

Krabuanrat, K., & Phelps, R. (1998). Heuristics and rationality in strategic decision making: an exploratory study. Journal of Business Research, 41 (1), 83–93.

Krawczyk, M. W., & Rachubik, J. (2019). The representativeness heuristic and the choice of lottery tickets: a field experiment. Judgment and Decision making, 14 (1), 51–57.

LaValle, S., Lesser, E., Shockley, R., Hopkins, M. S., & Kruschwitz, N. (2010). Big data, analytics and the path from insights to value. MIT Sloan Management Review, 52 (2), 21–31.

LeRoux, K., & Wright, N. S. (2010). Does performance measurement improve strategic decision making? Findings from a national survey of nonprofit social service agencies. Nonprofit and Voluntary Sector Quarterly, 39 (4), 571–587.

Li, L. (2005). The effects of trust and shared vision on inward knowledge transfer in subsidiaries’ intra- and inter-organizational relationships. International Business Review, 14 (1), 77–95.

Lindley, D. V. (2000). The philosophy of statistics. Journal of the Royal Statistical Society Series D, 49 (3), 293–319.

Maani, K., & Maharaj, V. (2004). Links between systems thinking and complex decision making. System Dynamics Review, 20 (1), 21–48.

Maldonado, M. & Grobbelaar, S. (2017). System dynamics modeling in the innovation systems literature. In : 15 th Globelics International Conference. Athens: Globelics, pp. 1–32.

Marsee, J. (2002). Ten steps for implementing change . Vancouver: Nacubo.

Martin, A., & Moon, P. (1992). Purchasing decisions, partial knowledge, and economic search- experimental and simulation evidence. Journal of Behavioral Decision Making, 5 (4), 253–266.

Min, D. J., & Cunha, M. (2019). The influence of horizontal and vertical product attribute information on decision making under risk: the role of perceived competence. Journal of Business Research, 97 (C), 174–183.

Mingers, J., & White, L. (2010). A review of the recent contribution of systems thinking to operational research. European Journal of Operational Research, 207 (3), 1147–1161.

Nepomuceno-Fernández, A., Soler-Toscano, F., & Velazquez-Quesada, F. R. (2013). Journal of Applied Logic, 11 (4), 505–522.

Nutley, S., & Davies, H. T. O. (2000). Making a reality of evidence-based practice: some lessons from the diffusion of innovations. Public Money & Management, 20 (4), 35–42.

Ooi, K. B., & Arumugam, V. (2006). The influence of corporate culture on organisational commitment: case study of semiconductor organisations in Malaysia. Sunway Academic Journal, 3 , 99–115.

Oreg, S. (2003). Resistance to change: developing an individual differences measure. Journal of Applied Psychology, 88 (4), 680–693.

Pachur, T., & Forrer, E. A. (2013). Selection of decision strategies after conscious and unconscious thought. Journal of Behavioral Decision Making, 26 (5), 477–488.

Pachur, T., Bröder, A., & Marewski, J. N. (2008). The recognition heuristic in memory-based inference: is recognition a non-compensatory cue? Journal of Behavioural Decision Making, 21 (2), 183–210.

Parnell, G. S., Driscoll, P. J., & Henderson, D. L. (Eds.). (2011). Decision making in systems engineering and management . New Jersey: John Wiley & Sons Inc..

Pfeffer, J., & Sutton, R. I. (In press). Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management . Boston: Harvard Business School Press.

Phipps, A. G. (1988). Rational versus heuristic decision making during residential search. Geographical Analysis, 20 (3), 231–248.

Piderit, S. K. (2000). Rethinking resistance and recognizing ambivalence. Academy of Management Review, 25 (4), 783–794.

Radovic-Markovic, M. (2007). The perspective of women's entrepreneurship in the age of globalization . Charlotte: Information Age Publishing Inc..

Raineri, A. B. (2011). Change management practices: impact on perceived change results. Journal of Business Research, 64 (3), 266–272.

Rousseau, D. M. (2012). Envisioning evidence-based management. The Oxford handbook of evidence-based management , pp. 3–24.

Rousseau, D. M. (2018). Making evidence-based organizational decisions in an uncertain world. Organizational Dynamics, 47 , 135–146.

Rugman, A., & Hodgetts, R. (2001). The end of global strategy. European Management Journal, 19 (4), 332–344.

Rumelt, P. R. (2011). Good strategy/bad strategy: the difference and why it matters . New York: Crowd Publishing Group.

Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: how to practice and teach EBM . New York: Churchill Livingstone.

Samara, E., Georgiadis, P., & Bakouros, I. (2012). The impact of innovation policies on the performance of national innovation systems: a system dynamics analysis. Technovation, 32 (11), 624–638.

Sarasin, F. P. (1999). Decision analysis and the implementation of evidence-based medicine. Monthly Journal of the Association of Physicians, 92 (11), 669–671.

Schein, E. (2004). Organizational culture and leadership (3rd ed.). San Francisco: Jossey-Bass.

Schermerhorn, J. G., Hunt, J. G., & Osborn, R. N. (2005). Organizational behavior (9th ed.). New York: John Wiley & Sons Inc..

Schilling, M. A., & Steensma, H. K. (2001). The use of modular organizational forms. Academy of Management Journal, 44 , 1149–1168.

Schuldt, J. P., Chabris, C. F., Williams Woolley, A., & Hackman, J. R. (2017). Confidence in dyadic decision making: the role of individual differences. Journal of Behavioural Decision Making, 30 , 168–118.

Senge, P. M. (1990). The fifth discipline. The Art and Practice of the Learning Organisation. Century Business.

Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: an effort-reduction framework. Psychological Bulletin, 134 (2), 207–222.

Shattuck, L. G., & Miller, N. L. (2006). Extending naturalistic decision making to complex organizations: a dynamic model of situated cognition. Organization Studies, 27 (7), 989–1009.

Sherman, L. W. (2002). Evidence-based policing: social organization of information for social control. In E. Waring & D. Weisburd (Eds.), Crime and social organization (pp. 217–248). New Brunswick: Transaction Publishers.

Skyttner, L. (2001). General systems theory: ideas & publications . Singapore: World Scientific Publishing.

Skyttner, L. (2006). General systems theory: problems, perspectives, practice (2nd ed.). NJ: World Scientific Publishing.

Snyder, S. (2013). The simple, the complicated, and the complex: educational reform through the lens of complexity theory . OECD Education Working Papers, No. 96, OECD Publishing, pp. 11.

Sondoss, E., Guillaume, J. H. A., Filatova, T., Josefine, R., & Jakeman, A. J. (2015). A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent- based models. Journal of Environmental Management, 151 , 500–516.

Sterman, J. D. (2000). Business dynamics: systems thinking and modeling for a complex world . Boston: Irwin McGraw-Hill.

Stewart, T. J. (1992). A critical survey on the status of multiple criteria decision making theory and practice. Omega, 20 (5), 569–586.

Stragalas, N. (2010). Improving change implementation. OD Practitioner: Organization Development Network, 42 (1), 31–38.

Svenson, O., Gonzalez, N., & Eriksson, G. (2018). Different heuristics and same bias: a spectral analysis of biased judgments and individual decision rules. Judgment and Decision making, 13 (5), 401–412.

Svensson, J. (2004). Managing legitimacy in hybrid governance. In NIG Annual Work Conference . [online] Rotterdam: University of Twente, pp 1–17. Available at: file:///C:/Users/pkili/Downloads/NIG4-01.pdf [Accessed on 14/10/2018].

Sverdrup, T. E., & Stensaker, I. G. (2017). Restoring trust in the context of strategic change. Strategic Organization , 1–28.

Talebi, S. (2015). Employee ownership as a driver of the need for change in organizations. European Scientific Journal, 11 (1), 130–137.

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18 (7), 509–533.

Trader-Leigh, K. E. (2001). Case study: identifying resistance in managing change. Journal of Organizational Change Management, 15 (2), 138–155.

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence- informed management knowledge by means of systematic review. British Journal of Management, 14 (3), 207–222.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science, 185 (4157), 1124–1131.

Venkata, R. (2007). Decision making in the manufacturing environment . Springer Series in Advanced Manufacturing.

Vishwanath, V. B., & Farimah, H. (2012). Toward a theory of evidence based decision making. Management Decision, 50 (5), 832–867.

Vlaev, I., & Dolan, P. (2015). Action change theory: a reinforcement learning perspective on behavior change. Review of General Psychology, 19 (1), 69–95.

Walumbwa, F. O., Maidique, M. Q., & Atamanik, C. (2014). Decision-making in a crisis: what every leader needs to know. Organizational Dynamics, 43 (4), 284–293.

Weber, J. M. (2019). Individuals matter, but the situation’s the thing: the case for a habitual situational lens in leadership and organizational decision-making. Organizational Dynamics, 49 (1), 1–8.

Wyer, R. S., & Srull, T. K. (1986). Human cognition in its social context. Psychological Review, 93 (3), 322–359.

Yilmaz, S., Daly, S. R., Seifert, C. M., & Gonzalez, R. (2016). Evidence-based design heuristics for idea generation. Design Studies, 46 , 95–124.

Yurtseven, M. K., & Buchanan, W. W. (2016). Decision making and systems thinking: educational issues. American Journal of Engineering Education, 7 (1), 19–28.

Download references

Author information

Authors and affiliations.

Nottingham Business School, Nottingham Trent University, 50 Shakespeare Street, Nottingham, NG1 4QU, UK

Tariq Mahadeen & Kostas Galanakis

Department of Regional Development & Cross Border Studies, University of Western Macedonia, Kozani, 50100, Greece

Elpida Samara

Department of Mechanical Engineering, Univeristy of Western Macedonia, Kozani, 50132, Greece

Pavlos Kilintzis

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Pavlos Kilintzis .

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article

Mahadeen, T., Galanakis, K., Samara, E. et al. Heuristics and Evidences Decision (HeED) Making: a Case Study in a Systemic Model for Transforming Decision Making from Heuristics-Based to Evidenced-Based. J Knowl Econ 12 , 1668–1693 (2021). https://doi.org/10.1007/s13132-020-00688-4

Download citation

Received : 24 April 2020

Accepted : 24 August 2020

Published : 01 September 2020

Issue Date : December 2021

DOI : https://doi.org/10.1007/s13132-020-00688-4

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Evidence-based decision making
  • Change process
  • Resistance to change
  • System dynamics modeling
  • Find a journal
  • Publish with us
  • Track your research

Skip navigation

  • Log in to UX Certification

Nielsen Norman Group logo

World Leaders in Research-Based User Experience

The theory behind heuristic evaluations.

Portrait of Jakob Nielsen

November 1, 1994 1994-11-01

  • Email article
  • Share on LinkedIn
  • Share on Twitter

Heuristic evaluation   (Nielsen and Molich, 1990; Nielsen 1994) is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the "heuristics").

In general, heuristic evaluation is difficult for a single individual to do because one person will never be able to find all the usability problems in an interface. Luckily, experience from many different projects has shown that different people find different usability problems. Therefore, it is possible to improve the effectiveness of the method significantly by involving multiple evaluators. Figure 1 shows an example from a case study of heuristic evaluation where 19 evaluators were used to find 16 usability problems in a voice response system allowing customers access to their bank accounts (Nielsen 1992). Each of the black squares in Figure 1 indicates the finding of one of the usability problems by one of the evaluators. The figure clearly shows that there is a substantial amount of nonoverlap between the sets of usability problems found by different evaluators. It is certainly true that some usability problems are so easy to find that they are found by almost everybody, but there are also some problems that are found by very few evaluators. Furthermore, one cannot just identify the best evaluator and rely solely on that person's findings. First, it is not necessarily true that the same person will be the best evaluator every time. Second, some of the hardest-to-find usability problems (represented by the leftmost columns in Figure 1) are found by evaluators who do not otherwise find many usability problems. Therefore, it is necessary to involve multiple evaluators in any heuristic evaluation (see below for a discussion of the best number of evaluators). My recommendation is normally to use three to five evaluators since one does not gain that much additional information by using larger numbers.

Heuristic evaluation is performed by having each individual evaluator inspect the interface alone. Only after all evaluations have been completed are the evaluators allowed to communicate and have their findings aggregated. This procedure is important in order to ensure independent and unbiased evaluations from each evaluator. The results of the evaluation can be recorded either as written reports from each evaluator or by having the evaluators verbalize their comments to an observer as they go through the interface. Written reports have the advantage of presenting a formal record of the evaluation, but require an additional effort by the evaluators and the need to be read and aggregated by an evaluation manager. Using an observer adds to the overhead of each evaluation session, but reduces the workload on the evaluators. Also, the results of the evaluation are available fairly soon after the last evaluation session since the observer only needs to understand and organize one set of personal notes, not a set of reports written by others. Furthermore, the observer can assist the evaluators in operating the interface in case of problems, such as an unstable prototype, and help if the evaluators have limited domain expertise and need to have certain aspects of the interface explained.

In a user test situation, the observer (normally called the "experimenter") has the responsibility of interpreting the user's actions in order to infer how these actions are related to the usability issues in the design of the interface. This makes it possible to conduct user testing even if the users do not know anything about user interface design. In contrast, the responsibility for analyzing the user interface is placed with the evaluator in a heuristic evaluation session, so a possible observer only needs to record the evaluator's comments about the interface, but does not need to interpret the evaluator's actions.

Two further differences between heuristic evaluation sessions and traditional user testing are the willingness of the observer to answer questions from the evaluators during the session and the extent to which the evaluators can be provided with hints on using the interface. For traditional user testing, one normally wants to discover the mistakes users make when using the interface; the experimenters are therefore reluctant to provide more help than absolutely necessary. Also, users are requested to discover the answers to their questions by using the system rather than by having them answered by the experimenter. For the heuristic evaluation of a domain-specific application, it would be unreasonable to refuse to answer the evaluators' questions about the domain, especially if nondomain experts are serving as the evaluators. On the contrary, answering the evaluators' questions will enable them to better assess the usability of the user interface with respect to the characteristics of the domain. Similarly, when evaluators have problems using the interface, they can be given hints on how to proceed in order not to waste precious evaluation time struggling with the mechanics of the interface. It is important to note, however, that the evaluators should not be given help until they are clearly in trouble and have commented on the usability problem in question.

Typically, a heuristic evaluation session for an individual evaluator lasts one or two hours. Longer evaluation sessions might be necessary for larger or very complicated interfaces with a substantial number of dialogue elements, but it would be better to split up the evaluation into several smaller sessions, each concentrating on a part of the interface.

During the evaluation session, the evaluator goes through the interface several times and inspects the various dialogue elements and compares them with a list of recognized usability principles (the heuristics). These heuristics are general rules that seem to describe common properties of usable interfaces. In addition to the checklist of general heuristics to be considered for all dialogue elements, the evaluator obviously is also allowed to consider any additional usability principles or results that come to mind that may be relevant for any specific dialogue element. Furthermore, it is possible to develop category-specific heuristics that apply to a specific class of products as a supplement to the general heuristics. One way of building a supplementary list of category-specific heuristics is to perform competitive analysis and user testing of existing products in the given category and try to abstract principles to explain the usability problems that are found (Dykstra 1993).

In principle, the evaluators decide on their own how they want to proceed with evaluating the interface. A general recommendation would be that they go through the interface at least twice, however. The first pass would be intended to get a feel for the flow of the interaction and the general scope of the system. The second pass then allows the evaluator to focus on specific interface elements while knowing how they fit into the larger whole.

Since the evaluators are not using the system as such (to perform a real task), it is possible to perform heuristic evaluation of user interfaces that exist on paper only and have not yet been implemented (Nielsen 1990). This makes heuristic evaluation suited for use early in the usability engineering lifecycle.

If the system is intended as a walk-up-and-use interface for the general population or if the evaluators are domain experts, it will be possible to let the evaluators use the system without further assistance. If the system is domain-dependent and the evaluators are fairly naive with respect to the domain of the system, it will be necessary to assist the evaluators to enable them to use the interface. One approach that has been applied successfully is to supply the evaluators with a typical usage scenario , listing the various steps a user would take to perform a sample set of realistic tasks. Such a scenario should be constructed on the basis of a task analysis of the actual users and their work in order to be as representative as possible of the eventual use of the system.

The output from using the heuristic evaluation method is a list of usability problems in the interface with references to those usability principles that were violated by the design in each case in the opinion of the evaluator. It is not sufficient for evaluators to simply say that they do not like something; they should explain why they do not like it with reference to the heuristics or to other usability results. The evaluators should try to be as specific as possible and should list each usability problem separately. For example, if there are three things wrong with a certain dialogue element, all three should be listed with reference to the various usability principles that explain why each particular aspect of the interface element is a usability problem. There are two main reasons to note each problem separately: First, there is a risk of repeating some problematic aspect of a dialogue element, even if it were to be completely replaced with a new design, unless one is aware of all its problems. Second, it may not be possible to fix all usability problems in an interface element or to replace it with a new design, but it could still be possible to fix some of the problems if they are all known.

Heuristic evaluation does not provide a systematic way to generate fixes to the usability problems or a way to assess the probable quality of any redesigns. However, because heuristic evaluation aims at explaining each observed usability problem with reference to established usability principles, it will often be fairly easy to generate a revised design according to the guidelines provided by the violated principle for good interactive systems. Also, many usability problems have fairly obvious fixes as soon as they have been identified.

For example, if the problem is that the user cannot copy information from one window to another, then the solution is obviously to include such a copy feature. Similarly, if the problem is the use of inconsistent typography in the form of upper/lower case formats and fonts, the solution is obviously to pick a single typographical format for the entire interface. Even for these simple examples, however, the designer has no information to help design the exact changes to the interface (e.g., how to enable the user to make the copies or on which of the two font formats to standardize).

One possibility for extending the heuristic evaluation method to provide some design advice is to conduct a debriefing session after the last evaluation session. The participants in the debriefing should include the evaluators, any observer used during the evaluation sessions, and representatives of the design team. The debriefing session would be conducted primarily in a brainstorming mode and would focus on discussions of possible redesigns to address the major usability problems and general problematic aspects of the design. A debriefing is also a good opportunity for discussing the positive aspects of the design, since heuristic evaluation does not otherwise address this important issue.

Heuristic evaluation is explicitly intended as a "discount usability engineering" method. Independent research (Jeffries et al. 1991) has indeed confirmed that heuristic evaluation is a very efficient usability engineering method. One of my case studies found a benefit-cost ratio for a heuristic evaluation project of 48: The cost of using the method was about $10,500 and the expected benefits were about $500,000 (Nielsen 1994). As a discount usability engineering method, heuristic evaluation is not guaranteed to provide "perfect" results or to find every last usability problem in an interface.

In This Article:

Determining the number of evaluators.

In principle, individual evaluators can perform a heuristic evaluation of a user interface on their own, but the experience from several projects indicates that fairly poor results are achieved when relying on single evaluators. Averaged over six of my projects, single evaluators found only 35 percent of the usability problems in the interfaces. However, since different evaluators tend to find different problems, it is possible to achieve substantially better performance by aggregating the evaluations from several evaluators. Figure 2 shows the proportion of usability problems found as more and more evaluators are added. The figure clearly shows that there is a nice payoff from using more than one evaluator. It would seem reasonable to recommend the use of about five evaluators, but certainly at least three. The exact number of evaluators to use would depend on a cost-benefit analysis. More evaluators should obviously be used in cases where usability is critical or when large payoffs can be expected due to extensive or mission-critical use of a system.

Nielsen and Landauer (1993) present such a model based on the following prediction formula for the number of usability problems found in a heuristic evaluation:

ProblemsFound( i ) = N(1 - (1-l) i )

where ProblemsFound( i ) indicates the number of different usability problems found by aggregating reports from i independent evaluators, N indicates the total number of usability problems in the interface, and l indicates the proportion of all usability problems found by a single evaluator. In six case studies (Nielsen and Landauer 1993), the values of l ranged from 19 percent to 51 percent with a mean of 34 percent. The values of N ranged from 16 to 50 with a mean of 33. Using this formula results in curves very much like that shown in Figure 2, though the exact shape of the curve will vary with the values of the parameters N and l , which again will vary with the characteristics of the project.

In order to determine the optimal number of evaluators, one needs a cost-benefit model of heuristic evaluation. The first element in such a model is an accounting for the cost of using the method, considering both fixed and variable costs. Fixed costs are those that need to be paid no matter how many evaluators are used; these include time to plan the evaluation, get the materials ready, and write up the report or otherwise communicate the results. Variable costs are those additional costs that accrue each time one additional evaluator is used; they include the loaded salary of that evaluator as well as the cost of analyzing the evaluator's report and the cost of any computer or other resources used during the evaluation session. Based on published values from several projects the fixed cost of a heuristic evaluation is estimated to be between $3,700 and $4,800 and the variable cost of each evaluator is estimated to be between $410 and $900.

The actual fixed and variable costs will obviously vary from project to project and will depend on each company's cost structure and on the complexity of the interface being evaluated. For illustration, consider a sample project with fixed costs for heuristic evaluation of $4,000 and variable costs of $600 per evaluator. In this project, the cost of using heuristic evaluation with i evaluators is thus $(4,000 + 600 i ).

The benefits from heuristic evaluation are mainly due to the finding of usability problems, though some continuing education benefits may be realized to the extent that the evaluators increase their understanding of usability by comparing their own evaluation reports with those of other evaluators. For this sample project, assume that it is worth $15,000 to find each usability problem, using a value derived by Nielsen and Landauer (1993) from several published studies. For real projects, one would obviously need to estimate the value of finding usability problems based on the expected user population. For software to be used in-house, this value can be estimated based on the expected increase in user productivity; for software to be sold on the open market, it can be estimated based on the expected increase in sales due to higher user satisfaction or better review ratings. Note that real value only derives from those usability problems that are in fact fixed before the software ships. Since it is impossible to fix all usability problems, the value of each problem found is only some proportion of the value of a fixed problem.

Figure 3 shows the varying ratio of the benefits to the costs for various numbers of evaluators in the sample project. The curve shows that the optimal number of evaluators in this example is four, confirming the general observation that heuristic evaluation seems to work best with three to five evaluators. In the example, a heuristic evaluation with four evaluators would cost $6,400 and would find usability problems worth $395,000.

  • Dykstra, D. J. 1993. A Comparison of Heuristic Evaluation and Usability Testing: The Efficacy of a Domain-Specific Heuristic Checklist . Ph.D. diss., Department of Industrial Engineering, Texas A&M University, College Station, TX.
  • Jeffries, R., Miller, J. R., Wharton, C., and Uyeda, K. M. 1991. User interface evaluation in the real world: A comparison of four techniques. Proceedings ACM CHI'91 Conference (New Orleans, LA, April 28-May 2), 119-124.
  • Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue, Communications of the ACM 33 , 3 (March), 338-348.
  • Nielsen, J. 1990. Paper versus computer implementations as mockup scenarios for heuristic evaluation. Proc. IFIP INTERACT90 Third Intl. Conf. Human-Computer Interaction (Cambridge, U.K., August 27-31), 315-320.
  • Nielsen, J., and Landauer, T. K. 1993. A mathematical model of the finding of usability problems. Proceedings ACM/IFIP INTERCHI'93 Conference (Amsterdam, The Netherlands, April 24-29), 206-213.
  • Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI'90 Conf. (Seattle, WA, 1-5 April), 249-256.
  • Nielsen, J. 1992. Finding usability problems through heuristic evaluation. Proceedings ACM CHI'92 Conference (Monterey, CA, May 3-7), 373-380.
  • Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods . John Wiley & Sons, New York, NY.

Related Topics

  • Heuristic Evaluation Heuristic Evaluation

Learn More:

to state that a case study is heuristic means that

The UX of Phone Trees

Tanner Kohler · 5 min

to state that a case study is heuristic means that

Discount Usability Revisited (Jakob Nielsen Keynote)

Jakob Nielsen · 36 min

to state that a case study is heuristic means that

Heuristic Evaluation of User Interfaces

Jakob Nielsen · 3 min

Related Articles:

Technology Transfer of Heuristic Evaluation and Usability Inspection

Jakob Nielsen · 19 min

Characteristics of Usability Problems Found by Heuristic Evaluation

Jakob Nielsen · 5 min

Severity Ratings for Usability Problems

Summary of Usability Inspection Methods

Jakob Nielsen · 1 min

10 Usability Heuristics Applied to Video Games

Alita Joyce · 10 min

Visibility of System Status (Usability Heuristic #1)

Aurora Harley · 7 min

IMAGES

  1. 22 Heuristics Examples (The Types of Heuristics)

    to state that a case study is heuristic means that

  2. Heuristics In Psychology: Definition & Examples

    to state that a case study is heuristic means that

  3. The heuristic research and case study processes

    to state that a case study is heuristic means that

  4. What Is A Heuristic And Why Heuristics Matter In Business

    to state that a case study is heuristic means that

  5. What is a Case Study? [+6 Types of Case Studies]

    to state that a case study is heuristic means that

  6. How to Create a Case Study + 14 Case Study Templates

    to state that a case study is heuristic means that

VIDEO

  1. PREPARING FOR 10TH BOARD EXAM VLOG

  2. Heuristic Search in Artificial Intelligence

  3. Murray State University Case Study

  4. Mamokete Destroys State Case, Zandi assist in the process. #senzomeyiwa #news #ironhitz

  5. #4 The Fight: What I Unearthed Through My Experience With “Performative” Masculinity? w/ Dr. Freddy

  6. Must Watch Video!! The origin of the word "Bitch" and what it means

COMMENTS

  1. Heuristic evaluation: Definition, case study, template

    The heuristic evaluation method’s main goal is to evaluate the usability quality of an interface based on a set of principles, based on UX best practices. From the identification of the problems, it is possible to provide practical recommendations and consequently improve the user experience.

  2. Redefining Case Study - SAGE Journals

    We also propose a more precise and encompassing definition that reconciles various definitions of case study research: case study is a transparadigmatic and transdisciplinary heuristic that involves the careful delineation of the phenomena for which evidence is being collected (event, concept, program, process, etc.).

  3. Heuristics: Definition, Examples, and How They Work

    Heuristics are mental shortcuts that allow people to solve problems and make judgments quickly and efficiently. These rule-of-thumb strategies shorten decision-making time and allow people to function without constantly stopping to think about their next course of action. However, there are both benefits and drawbacks of heuristics.

  4. Heuristics In Psychology: Definition & Examples

    Psychologists refer to these efficient problem-solving techniques as heuristics. A heuristic in psychology is a mental shortcut or rule of thumb that simplifies decision-making and problem-solving. Heuristics often speed up the process of finding a satisfactory solution, but they can also lead to cognitive biases.

  5. What Is the Affect Heuristic? | Example & Definition - Scribbr

    The affect heuristic is a possible explanation for a range of purchase decisions, such as buying insurance. Example: Affect heuristic and insurance. In a study examining how people’s feelings impact their willingness to buy insurance, participants were presented with two scenarios regarding an antique clock. In both scenarios, the value of ...

  6. Case Studies: Using Heuristics | SpringerLink

    This case study aims to show how the Mathematical Programming Model for this problem is used to develop, and validate, a heuristic means of deriving optimal solutions to such a problem. The initial models consider a case where there is a single product, and all working is carried out during the normal working hours.

  7. The Recognition Heuristic: A Review of Theory and Tests

    The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the ...

  8. A brief history of heuristics: how did research on heuristics ...

    The representativeness heuristic is applied when individuals assess the probability that an object belongs to a particular class or category based on how much it resembles the typical case or ...

  9. Heuristics and Evidences Decision (HeED) Making: a Case Study ...

    Studies refer to Heuristics and Evidences Decision Making approaches in a comparative manner; however, it is identified that these two approaches are inseparable and are applied in parallel. The objective of this paper is to provide a qualitative analysis of a systems thinking framework that defines a transition path from either a heuristic dominated or evidence-based dominated decision-making ...

  10. The Theory Behind Heuristic Evaluations, by Jakob Nielsen

    Independent research (Jeffries et al. 1991) has indeed confirmed that heuristic evaluation is a very efficient usability engineering method. One of my case studies found a benefit-cost ratio for a heuristic evaluation project of 48: The cost of using the method was about $10,500 and the expected benefits were about $500,000 (Nielsen 1994).