
The Oxford Handbook of Thinking and Reasoning


4 Knowledge Representation

Arthur B. Markman, Department of Psychology, University of Texas, Austin, TX

  • Published: 21 November 2012

Theories in psychology make implicit or explicit assumptions about the way people store and use information. The choice of a format for knowledge representation is crucial, because it influences what processes are easy or hard for a system to accomplish. In this chapter, I define the concept of a representation. Then, I review three broad types of representations that have been incorporated into many theories. Finally, I examine proposals for the role of body states in representation as well as proposals that the concept of knowledge representation has outlived its usefulness.

Introduction

Theories of psychological functioning routinely make assumptions about the type of information that people use to carry out a process. They also make proposals for the form in which that information is stored and the procedures by which it is used. The type, form, and use of information by psychological mechanisms are incorporated into psychological proposals about knowledge representation. This chapter aims to provide a broad introduction to knowledge representation (see Markman, 1999, for a more detailed discussion of these issues).

In this chapter, I start by defining what I mean by a representation. Then, I discuss some of the kinds of information that people have proposed to be central to people's mental representations. Next, I describe three proposals for the way that people's knowledge is structured. Finally, I explore some broader controversies within the field. For instance, I describe an antirepresentationalist argument that suggests that we can safely dispense with the notion of knowledge representation. I end with a call for a pluralist approach to representation.

Mental Representations

The modern notion of a mental representation emerged during the cognitive revolution of the 1950s, when the computational view of the mind ascended. The behaviorist approach to psychology that played a significant role in American psychology explicitly denied that the form and content of people's knowledge were legitimate objects of scientific study. Advances in the development of digital computers, however, provided a theoretical basis for thinking about how information could be stored and manipulated in a device.

On this computational view, minds are descriptions of the nature of the program that is implemented (in humans) by brains. Just as computers with very different hardware architectures could implement the same word-processing program (and thus make use of the same data structures and algorithms), different brains might compute the same sorts of functions and thus create representations (mental data structures) and processes (mental algorithms). Thus, proposals for knowledge representation are often stated at a level of description that abstracts across the details of what the brain is doing to implement that process yet nonetheless specifies in some detail how some functions are computed. Marr (1982) called this the algorithmic level of description. He called the abstract function being computed by an algorithm the computational level (see Griffiths, Tenenbaum, & Kemp, Chapter 3). What the brain is doing, which Marr (1982) called the implementational level of description, may provide some constraints on our understanding of these mental representations (see Morrison & Knowlton, Chapter 6; Green & Dunbar, Chapter 7), but not all of the details of an implementation are necessary to understand the way the mind works.

Figure 4.1. Example of the four key aspects of a definition of a representation. A representing world corresponds to a represented world through some set of representing relations. Processes must be specified that make use of the information in the representation.

Defining Mental Representation

To define the concept of representation, I draw from work by Palmer (1978) and Pylyshyn (1980). In order for something to qualify as a representation, four conditions have to hold. First, there has to be some representing world. The representing world is the domain serving as the representation. In theories of psychological processing, this representing world may be a collection of symbols or a multidimensional space. In Figure 4.1, mental symbols for the numbers 0–9 are used to represent numerical quantities.

There is also a represented world. The represented world is the domain or information that is to be represented. There is almost always more information in the world than can be captured by some representing world. Thus, the act of representing some world almost always leads to some loss of fidelity (Holland, Holyoak, Nisbett, & Thagard, 1986). In the example in Figure 4.1, representing the world of numbers with mental symbols will lose information, because the space of numbers is continuous, while this symbolic representation has discrete elements in it.

The third component of a representation is a set of representing relations that determines how the representing world stands in for the represented world. In Figure 4.1, these representing relations are shown as an arrow connecting the representing world to the represented world. There has to be some kind of relationship between the items that are used as the representation and the information being represented. These relations are what give the representation its meaning. I'll discuss this issue in more detail in the next section.

Finally, in order for something to function as a representation, there has to be some set of processes that use the information in the representation for some function. Without some processes that act on the representing world, the potential information in the representation is inert. In the case of the number representation in Figure 4.1, there need to be procedures that generate an ordering of the numbers and procedures that specify operations such as addition and multiplication, as well as other functions defined over numbers.

One reason why it is important to specify both the represented world and the processes is that it is tempting for readers to look at a representing world and have intuitions about the information that is captured in it. For example, you probably know a lot about numbers, and so you may bring that knowledge to bear when you see that the symbols in Figure 4.1 are associated with numbers. However, unless the system that is going to use this representation has procedures for manipulating these symbols, the system that has this representing world does not have the same degree of knowledge that you do.
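The four components just described can be made concrete with a short sketch of the number example in Figure 4.1. This is a hypothetical illustration, not anything specified in the chapter: the symbol set, the mapping, and the two processes are all invented for the purpose of the example.

```python
# Hypothetical sketch of the four components of a representation,
# using the number example. All names here are illustrative.

# Representing world: a collection of discrete symbols.
SYMBOLS = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]

# Representing relations: each symbol stands in for a numerical quantity.
STANDS_FOR = {s: i for i, s in enumerate(SYMBOLS)}

# Processes: without these, the symbols are inert marks.
def comes_before(a, b):
    """Ordering process defined over the symbols."""
    return STANDS_FOR[a] < STANDS_FOR[b]

def add(a, b):
    """Addition process; returns a symbol, or None when the
    result falls outside the representing world."""
    total = STANDS_FOR[a] + STANDS_FOR[b]
    return SYMBOLS[total] if total < len(SYMBOLS) else None

print(comes_before("3", "7"))  # True
print(add("3", "4"))           # 7
```

Note that `add("6", "6")` returns `None`: the representing world has only ten symbols, which echoes the earlier point that representing a world almost always loses fidelity.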

The definition that I just gave does not distinguish between mental representations and other things that serve as representations. For example, I may take a digital photograph as a representation of a visual scene. The photograph represents color by having a variety of pixels bunched close together that take on the wavelength of light that struck a detector when the picture was taken. This lawful relationship is what allows this photo to serve as a representation of some scene. Furthermore, when you look at that photo, your visual and conceptual systems are able to interpret what is in the image, thereby serving as a process for extracting and using information from the representation.

In order for something to count as a mental representation, of course, the representing world needs to be inside the head. Humans use a variety of external representations, including photos, writing, and a variety of tools. The relationship between humans and the representations they put in their environment is interesting, and I discuss it a bit more at the end of this chapter.

Giving Meaning to Representations

In a computer program, data structures function to store the data that a program is going to manipulate in order to carry out some function. As I discussed earlier, a mental representation plays a role within cognitive systems that is analogous to the role of the data structure in a program. In order for this representation to serve effectively, though, there has to be some way to ensure that the representation captures the right information. We can think of the information that is captured by a particular representation as its meaning.

What are the sources of meaning in mental representations? Obviously, one source of meaning is the set of representing relations that relate the representing world to the represented world. These relations help to ground the representation. As a very simple example, consider a thermostat. One way to build a thermostat is to use a bimetal strip. The two metals that make up the strip expand and contract at different rates when exposed to heat. Thus, the curvature of the strip changes continuously with changes in temperature. In a typical thermostat, the bimetal strip is connected to a vial of conducting liquid (like mercury) that will close an electrical switch between two contacts when the right level of curvature is reached. Within this system, the lawful relationship between air temperature and the curvature of the strip provides a grounding for the representation of temperature.
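The thermostat can be sketched as a toy simulation. The linear curvature law and the setpoint below are invented numbers; the point is only that a lawful representing relation (temperature determines curvature) plus a simple process (a threshold switch) is enough to ground a minimal representation of temperature.

```python
# Toy model of the bimetal-strip thermostat. The constants are
# invented for illustration, not taken from any real device.

def strip_curvature(air_temp_c):
    """Representing relation: curvature varies lawfully
    (here, linearly) with air temperature."""
    return 0.1 * air_temp_c

def switch_closed(air_temp_c, setpoint_c=20.0):
    """Process: the switch closes once the strip's curvature
    reaches the curvature corresponding to the setpoint."""
    return strip_curvature(air_temp_c) >= strip_curvature(setpoint_c)

print(switch_closed(18.0))  # False: curvature below threshold
print(switch_closed(25.0))  # True: curvature at or above threshold
```

Because curvature is a lawful function of temperature, reading the switch state genuinely carries information about the air temperature; that lawfulness is what does the grounding.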

Many mental representations that get (part of) their meaning through grounding are actually grounded in other representations. For example, research on vision focuses on how people detect the edges of objects in an image. Even these basic processes, however, are making use of other internal states. After all, light hits the retina at the back of the eye, and then the retina turns that light into electrical signals that are sent to the brain. From that point forward, all other computations are performed on information that is already internal to the cognitive system. In some cases, we can trace a chain of representations all the way back out to states in the world through the set of representing relations that link one representation to the next.

The ability to ground representations in actual perceptual states of the world, though, is complicated by the fact that people are able to represent things that are not present at the moment in the environment. That is, not only can I clearly see my dog in front of me when she is in the room, I can also imagine what my dog looks like when I am at work and she is at home. In this case, I may have some representations in my head that are grounded in others, but at some point, there are states of my cognitive system that are not lawfully related to any existing state of the world at that moment. For that matter, I can believe in all sorts of things like ghosts, unicorns, or the Tooth Fairy that do not exist and never have. Thus, there must be other sources of meaning in mental representations.

A proposal for dealing with the fact that not every mental representation can get its meaning solely from a lawful representing relationship to states of the world comes from philosophical work on conceptual role semantics (Fodor, 1981; Stich & Warfield, 1994). On this view, a mental representation gets its meaning as a result of the role that it plays within some broader representational system. This proposal acknowledges that the connections among representations may be crucial for their meaning. When I discuss semantic networks in the section on structured representations, we will see how the connections among representational elements influence meaning.

Obviously, not every mental representation in a cognitive system can derive its meaning from its connection to other representational elements (Searle, 1980 ). As an analogy, imagine that you had an English dictionary, but you didn't actually speak English. You could look up a word in the dictionary, and it would provide you the definition using other words. You could look up those words as well, but that would just give you additional definitions that use other words. Without having the sense of how some of the words relate to things that are not part of language (like knowing that dog refers to a certain class of four-legged creatures, or that above is a certain kind of relationship in the world), you would not really understand English. Likewise, at least some mental representations need to be grounded.

Types of Representations

There have been many different proposals for mental representations. In this section, I will catalog three key types of representations: mental spaces, featural representations, and structured representations. I discuss each of these representations in a separate section, but before that I want to discuss some general ways that proposals for representations differ from each other (see also Markman & Dietrich, 2000 ).

First, proposals for representations differ in the presence of discrete symbols. Some representations use continuous spaces. For example, an analog clock represents time using a circle. Every point on that circle has some meaning, though that meaning depends on whether the second hand, minute hand, or hour hand is pointing to that location. In contrast, a digital clock uses symbols to represent time. I will have more to say about symbols when discussing feature representations.

Second, representations differ in whether they include specific connections among the representational elements. Some representations (like the spatial and feature representations I discuss) involve collections of independent representational elements. In contrast, the semantic networks and structured representations discussed later specify relationships among the representational elements.

These two dimensions of difference are generally correlated with the type of representation. In addition, there are dimensions of difference that cross-cut the general proposals for types of representations. For example, representations differ in how enduring the states of the representation are intended to be. Some representations—particularly those that are involved in capturing basic visual information and states of the motor system—are meant to capture moment-by-moment changes in the environment. In contrast, others capture more enduring states.

A final dimension of difference is whether the representation is analog or symbolic. An analog representation is one in which the representing world has the same structure as the represented world. For example, I mentioned that analog clocks represent time using a circle. These clocks are analog representations, because both space and time are continuous. Increasing rotational distance in space is used to represent increasing differences in time, up to the limit of the time span of the circle.

One advantage of an analog representation is that there are many aspects of the structure of the representing world that can be used to represent relationships in the represented world without having to define them explicitly. Let us return to the example of using angular distance to represent time, when distances are measured on an absolute scale. If one interval is represented by a 90-degree movement and a second is represented by a 180-degree movement, the first interval moves half the distance of the second. Without having to create any additional relations, we can also assume that the first time interval is half the length of the second.

It is rare to find a representing world with structure similar enough to that of a represented world to warrant creating an analog representation. Thus, most representations are symbolic: they have an arbitrary relationship between the representing world and the represented world. With symbolic representations, the representing world has no inherent structure of its own, and so all of the relationships within the representing world have to be defined explicitly. For example, if we used numbers to represent time, then the representing world of symbols would have to define all of the relationships among the symbols for the different numbers in order to capture the relevant relations in the represented world. The ability to determine that one interval is half the length of another, for instance, has to be built into the system, rather than falling out of the structure of the representing world itself.

Finally, it is important to note that an important reason why there are so many different proposals for kinds of knowledge representations is that any choice of a representation makes some processes easy to perform and makes other things difficult to do (see Doumas & Hummel, Chapter 5 ). There is good reason to believe that the cognitive system uses many different kinds of representations in order to provide systems that are optimized for particular tasks that must be carried out (Dale, Dietrich, & Chemero, 2009 ; Dove, 2009 ). Thus, when evaluating proposals about representations, it is probably best to think about what kinds of representations are best suited to a particular process rather than trying to find a way to account for all of cognition with a particular narrow set of representational assumptions. I return to this point at the end of the chapter.

Spatial Representations

Defining spaces.

The first type of representation on our tour uses space as the representing world (Gärdenfors, 2000 ). It might seem strange to think about a mental representation involving space. We clearly use physical spaces to help us represent things all the time. For example, a map creates a two-dimensional layout that we then look at to get information about the spatial relationships in the world. And I have already discussed analog clocks in which angular distance in space is used to represent time.

Space in the outside world has three dimensions. That means that you can put at most three independent (or orthogonal) lines into space. Once you have those three lines set up, you can create an address for every object in the space using its distance along each of those three dimensions. Mathematically, though, a space can have any number of dimensions. In these high-dimensional spaces, each point has a coordinate on each dimension that determines its location, just as in the familiar three-dimensional space. Objects can then be represented by points, or perhaps regions, in space.

Once you have located points in space, it is straightforward to measure the distance between them using the formula

d(x,y) = ( Σ_{i=1..N} |x_i − y_i|^r )^(1/r)

where d(x,y) is the distance between points x and y (each with a coordinate x_i and y_i on each dimension i), N is the number of dimensions, and r is the distance metric, sometimes called the Minkowski metric. When measuring distance in a space, the familiar Euclidean straight-line distance sets r = 2. When r = 1, distance is measured using a "city block" metric in which the distance corresponds to the summed distances along the axes of the space.
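The distance calculation can be sketched in a few lines (Python; coordinates are assumed to arrive as plain sequences of numbers):

```python
def minkowski(x, y, r=2.0):
    """Minkowski distance between points x and y with metric parameter r.
    r=2 gives the familiar Euclidean distance; r=1 gives city-block distance."""
    assert len(x) == len(y), "points must have the same number of dimensions"
    return sum(abs(xi - yi) ** r for xi, yi in zip(x, y)) ** (1.0 / r)

print(minkowski((0, 0), (3, 4), r=2))  # Euclidean: 5.0
print(minkowski((0, 0), (3, 4), r=1))  # city block: 7.0
```

The same pair of points is farther apart under the city-block metric, because travel is restricted to moves along the axes.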

It is easy to calculate distance within a space, and so the distance between points becomes an important element in spatial representations. For example, Rips, Shoben, and Smith ( 1973 ) tried to map people's conceptual spaces for simple concepts like birds and animals (see Rips et al., Chapter 11 ). An example of these spaces is shown in Figure 4.2 . The idea here is that pairs of similar birds (like robin and sparrow) are near each other in the space, while pairs of dissimilar birds (like robin and goose) are far apart. A cognitive process that uses a spatial representation can calculate the distance between concepts when it needs information about the similarity of the items. For example, Rips et al. found that the amount of time it took to verify sentences like "A robin is a bird" increased with the distance between the points representing the concepts in the space. Consistent with this model, people are faster to verify that the sentence "A robin is a bird" is true than to verify that the sentence "A duck is a bird" is true.

One reason why spatial models of mental representation are appealing is that there are mathematical methods for generating spaces from data about the closeness of items. One technique, called multidimensional scaling (MDS), is the one that Rips et al. used in their study (Shepard, 1962; Torgerson, 1965). Multidimensional scaling places points in a space based on information about the distances among those points. For example, if you were to give an MDS algorithm the pairwise distances among 15 European cities, you would get back a map of the locations of those cities in two dimensions.

A sample semantic space of a set of concepts representing various birds (taken from Rips, Shoben, & Smith, 1973 ).

This same technique can be used for mental distances. For example, Rips et al. had people rate the similarities among the pairs of concepts in Figure 4.2 . Similarity ratings can be interpreted as a measure of mental closeness (see Goldstone & Son, Chapter 10 ). These similarity ratings can be fed into an MDS algorithm. The space that results from this algorithm is the one that best approximates the mental distances it was given. One difficulty with creating spaces like this is that they require a lot of effort from research participants. If there are X items in the space, then there are

X(X − 1)/2

distances among those points. That requires a lot of ratings from people, and so in practice it becomes difficult to generate spaces with more than 15 or 20 items in them.
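A quick sketch makes the scaling problem concrete (Python; the item counts are illustrative):

```python
def n_pairwise_ratings(n_items: int) -> int:
    """Each unordered pair of items needs one similarity rating: n(n-1)/2."""
    return n_items * (n_items - 1) // 2

for n in (10, 15, 20, 50):
    print(n, n_pairwise_ratings(n))
# 10 items -> 45 ratings, 15 -> 105, 20 -> 190, 50 -> 1225
```

The quadratic growth is the practical obstacle: doubling the item set roughly quadruples the number of ratings each participant must provide.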

Other techniques have been developed that create very high-dimensional spaces among larger sets of items. A powerful set of techniques uses the co-occurrence relationships among words to develop high-dimensional semantic spaces from corpora of text (Burgess & Lund, 2000 ; Landauer & Dumais, 1997 ). These techniques take advantage of the fact that words with similar meanings often appear along with the same kinds of words in sentences. For example, in the sentence “The X jumped up the stairs” the words that can play the role of X are all generally animate beings.

These techniques can process millions of words from on-line databases of sentences from newspapers, magazines, blog entries, and Web sites. As a result, these techniques can create spaces with hundreds or thousands of dimensions that represent the relationships among thousands of words. The interested reader is invited to explore the papers cited in the previous paragraph for details about how these systems operate. For the present purposes, what is important is that after the space is generated, the high-dimensional distances among these concepts in the space are used to represent differences in meanings among the concepts.
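A toy sketch of the co-occurrence idea (Python; the five-sentence corpus and window size are illustrative, and real systems such as HAL and LSA process vastly larger corpora and add further steps like dimensionality reduction):

```python
from collections import Counter, defaultdict
from math import sqrt

# Tiny illustrative corpus: "dog" and "cat" appear in the same contexts,
# while "car" appears in a different context.
corpus = [
    "the dog chased the ball",
    "the cat chased the ball",
    "the dog ate the food",
    "the cat ate the food",
    "the car drove down the road",
]

window = 2  # count co-occurrences within two positions of each word
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                vectors[w][words[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

print(cosine(vectors["dog"], vectors["cat"]))  # high: shared contexts
print(cosine(vectors["dog"], vectors["car"]))  # lower: few shared contexts
```

Even in this toy space, words that fill the same sentence roles end up with similar co-occurrence vectors, which is the regularity the real systems exploit at scale.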

Strengths and Weaknesses of Spatial Models

There are two key strengths of spatial models of representation. On the practical side, the availability of techniques like MDS and of methods for generating high-dimensional semantic spaces from text provides modelers with a way of creating representations from the data obtained from subjects in studies. For the other types of representations described in this chapter, it is often more difficult to determine the information that ought to end up in the representations generated as parts of explanations of particular cognitive processes.

A second strength of spatial representations is that the mental operations that can be performed on a space are typically quite efficient. For example, there are very simple algorithms for calculating distance among points in a space, and so the processes that operate on spatial representations are quite easy to carry out. Thus, spatial representations are often used when comparisons have to be made among a large number of items.

Despite these strengths, there are also limitations of spatial representations. The core limitation is that the calculation of distance between points yields only a measure of their degree of proximity. It is also possible to generate vectors that measure a distance and direction between points. However, the underlying dimensions of a semantic space have no obvious meaning. A modeler looking at the representation may ascribe some meaning to a given distance or direction, but the system itself has access only to the vector or distance.

People are also able to focus on specific commonalities and differences among pairs of items, and these properties influence people's assessments of similarity. For example, Tversky and his colleagues (Tversky, 1977 ; Tversky & Gati, 1982 ) explored the commonalities and differences that affect judgments of similarity. As an example of the kinds of items he used, I'll paraphrase a discussion by William James ( 1892 / 1985 ). When people compare the moon and a ball, they find a moderate degree of similarity, because both are round. When people compare the moon and a lamp, they find a moderate degree of similarity, because both are bright. However, when people compare a ball and a lamp, they find no similarity at all, because they don't share any properties. As Tversky ( 1977 ) pointed out, this pattern of similarities is incompatible with a space, because if two pairs of items in a real space are fairly close to each other, then the third pair of points must also be reasonably close. In the mathematical definition of a space, this aspect is called the triangle inequality . The observed pattern of similarity judgments by people reflects that they give a lot of weight to specific commonalities among pairs, though different pairs may focus on different commonalities. Thus, human similarity judgments frequently violate the triangle inequality.
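The metric constraint can be checked directly (Python; the three dissimilarity values are illustrative stand-ins for the qualitative pattern James described, not measured data):

```python
# In any metric space, the distance between two points can be no greater
# than the sum of their distances to any third point (triangle inequality).

def satisfies_triangle_inequality(d_ab, d_bc, d_ac):
    """True if the three pairwise dissimilarities could live in a metric space."""
    return (d_ac <= d_ab + d_bc and
            d_ab <= d_ac + d_bc and
            d_bc <= d_ab + d_ac)

d_moon_ball = 0.3  # moderately similar (both round) -> small distance
d_moon_lamp = 0.3  # moderately similar (both bright) -> small distance
d_ball_lamp = 1.0  # judged completely dissimilar -> large distance

print(satisfies_triangle_inequality(d_moon_ball, d_moon_lamp, d_ball_lamp))  # False
```

No assignment of points in any space can make ball and lamp farther apart than the sum of their distances to the moon, so this pattern of judgments has no spatial solution.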

One response to patterns of data that seem inconsistent with spatial representations is to add mechanisms that allow spaces or distances to be modified in response to context (Krumhansl, 1978 ; Nosofsky, 1986 ). However, these mechanisms tend to complicate the determination of distance. Because the simplicity of the computations in a space was one of the key strengths of spatial representations, many other theories have opted to use representations with explicit symbols in them to capture cognitive processes that involve a specific focus on particular representation elements. We turn to representations that consist of specific features in the next section.

Feature Representations

The reason why it is difficult to focus on particular commonalities and differences or particular properties in a spatial representation is that spaces are continuous. The core aspect of a feature representation is that it has discrete elements that make up the representation. These discrete elements allow processes to fix reference to particular items within the representation (Dietrich & Markman, 2003 ).

Typically, feature representations assume that some mental concept is represented by a collection or set of features, each of which corresponds to some property of the item. For example, early models of speech perception assumed that a set of features could be used to distinguish among the various phonemes that make up the sounds of a language (Jakobson, Fant, & Halle, 1963). On this view, the sound /b/ as in bog and the sound /d/ as in dog differ by the presence of a feature that marks where in the mouth the speech sound is produced. Linguists identified the particular features that distinguished phonemes by finding pairs of speech sounds that were as similar as possible except for the particular features that led them to be distinguished. To return to the example of /b/ and /d/, these phonemes are similar in other properties, like engaging the vocal cords (which distinguishes /b/ from /p/).

When this proposal for phoneme representation was being evaluated, a key aspect of the research program for understanding speech perception involved finding some mapping between the features of speech sounds and the speech signal itself (Blumstein & Stevens, 1981). On this view, the speech perception system would identify the features of a particular phoneme from aspects of the audio signal. A particular phoneme would be recognized when the collection of features associated with it was detected. While this proposal is straightforward, a weakness of this approach to speech perception is that it has proven difficult to isolate aspects of the speech signal that reliably indicate particular phonetic features.

Featural models have also been used prominently to study similarity (see Goldstone & Son, Chapter 10 ). In particular, Tversky's ( 1977 ) contrast model assumed that objects could be represented as sets of features. Comparing a pair of representations then requires only elementary set operations: the intersection of the two feature sets gives the commonalities of the pair, while the set differences give the distinctive features of each item. Tversky proposed that people's judgments of similarity should increase with the size of the set of common features and decrease with the size of the sets of distinctive features. He provided support for this model by having people describe various objects by listing their features. He found that people's judgments of the similarity of various pairs were positively related to the number of features that the lists had in common and negatively related to the number of features that were unique to one of the lists.
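The contrast model maps naturally onto set operations (Python; the feature lists and the weights theta, alpha, and beta are illustrative choices, not Tversky's fitted parameters):

```python
# Sketch of Tversky's (1977) contrast model using Python sets. Similarity
# rises with shared features and falls with each item's distinctive features.

def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    common = a & b  # features shared by both items
    a_only = a - b  # distinctive features of a
    b_only = b - a  # distinctive features of b
    return theta * len(common) - alpha * len(a_only) - beta * len(b_only)

robin   = {"feathers", "flies", "sings", "small"}
sparrow = {"feathers", "flies", "sings", "small"}
goose   = {"feathers", "flies", "large", "honks"}

print(contrast_similarity(robin, sparrow))  # identical feature sets: 4.0
print(contrast_similarity(robin, goose))    # 2 common, 2 + 2 distinctive: 0.0
```

Because the two distinctive sets are weighted separately (alpha vs. beta), the model can also capture asymmetric judgments, such as rating "North Korea is like China" higher than "China is like North Korea."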

Some proposals for feature representations augment the features with other information. For example, it is common to include information about the importance of particular features to a category in a representation. In a classic paper, Smith, Shoben, and Rips ( 1974 ) argued that people distinguish between the core features of items and their characteristic properties. For example, having feathers is a core property of birds, while singing is characteristic: it is typical of birds but not central for something to be a bird.

They argued that when people classify an object, they look first at all of the properties of the object. If they are uncertain of the category that something belongs to based on its overall similarity, then they focus just on the core properties. That is why people have difficulty classifying objects like dolphins. Dolphins have many features that are characteristic of fish, but they also have features of mammals. Only when people focus on the core properties of fish and mammals is it possible to classify a dolphin correctly as a mammal.

Some featural models also include information about the degree of belief in the feature. That is, there are some properties that someone may be certain are true of an object but others for which it is less clear. Some models have proposed that there are certainty factors that allow a system to keep track of the degree of belief in a particular property (see, e.g., Lenat & Guha, 1990 ; Shafer, 1996 ).

While people clearly need information about the degree of belief in a property or the likelihood that the information is true, it is not clear that some kind of marking on features is the best way to handle this kind of information. Often, we want to know more than just how strongly we believe something or how central it is to a category. We want to know why a particular fact is central or what it is that causes us to believe it. To support this kind of reasoning, it is useful to have representations that contain explicit connections among the representational elements. The following section discusses types of representations that capture relationships among representational elements.

Structured Representations

Feature representations do a good job of representing the properties of items but a poor job at representing relationships. Consider a variety of relationships that may exist in the world. Poodles are a kind of dog. John is taller than Mary. Sally outperformed Jack. These kinds of relationships are more than just properties of some item. The way that items are connected (or bound ) to the relationship also matters. Saying that poodles are a kind of dog is true, but saying that dogs are a type of poodle is not.

To capture these relational bindings, structured representations provide mechanisms for creating representational elements that take arguments, which specify the scope of the representation. For example, we can use the notation kind-of(?x, ?y) to denote that some item ?x is a kind of ?y. I precede the letters x and y with question marks to denote that they are variables that can be filled in with some value. Thus, to specify a particular relation, we fill in values for the variables.

Representations of this type are often called predicates . Once the values are specified, the representation states a proposition , and in a logical system all propositions can be evaluated as true or false, though most proposals for using predicate representations in psychological models do not evaluate the logical form of these predicate structures.

The example kind-of(?x, ?y) is a binary predicate, because it takes two arguments. A predicate that takes one argument, like red(?x), is often called an attribute , because it is typically used to describe the properties or attributes of objects. This type of predicate is particularly useful in situations in which there are multiple items in a scene, and it is necessary to bound the scope of the representation.

For example, consider the simple scenes at the top of Figure 4.3 . One depicts a circle on top of a square and the other shows a square on top of a circle. In the left-hand scene, one figure is shaded and another is striped. If we just had a collection of features (as shown in the middle of this figure), then it wouldn't be clear which object was striped and which one was shaded. Indeed, the same collection of features could be used to describe both the left- and right-hand scenes, even though they are clearly different.

Because attributes take arguments, though, it is possible to determine the scope of each representational element. The bottom section of Figure 4.3 shows a structured representation of the same pair of scenes drawn as a graph. The relation above(?x, ?y) is presented as an oval with lines connecting the relation to its arguments. Likewise, the rounded rectangles are attributes that are connected to the objects they describe. Using this type of representation, it is possible to specify that it is the circle that is striped in the left-hand scene.

A simple pair of geometric scenes. The middle panel shows that the same set of features can represent each scene. At the bottom, the explicit connections between representations and their arguments allow the scope of each representational element to be defined.
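The contrast between the two formats can be sketched directly (Python; the tuple encoding of predicates, and which shape carries which texture in the right-hand scene, are assumptions made for illustration):

```python
# The two scenes from Figure 4.3, first as unordered feature sets and then
# as predicates with explicit arguments.

# A bag of features cannot distinguish the scenes: both contain the same items.
left_features  = {"circle", "square", "striped", "shaded", "above"}
right_features = {"circle", "square", "striped", "shaded", "above"}
print(left_features == right_features)  # True: indistinguishable

# Structured form: each tuple is (predicate, arg1[, arg2]).
left_scene = {
    ("above", "circle", "square"),
    ("striped", "circle"),
    ("shaded", "square"),
}
right_scene = {
    ("above", "square", "circle"),
    ("striped", "square"),
    ("shaded", "circle"),
}
print(left_scene == right_scene)  # False: argument bindings tell them apart
```

The representational elements are the same in both encodings; only the structured version records which element is bound to which object.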

The increase in expressive power that structured representations provide requires that there be processes that are sensitive to this structure. These structure-sensitive processes often require more computational effort than the processes that were specified for spatial and featural representations. In the next two sections, I discuss two different approaches to structured representations that use different processing assumptions to access the connections among representational elements.

Semantic Networks

An early use of structured representations in psychology was the semantic network (Collins & Loftus, 1975; Collins & Quillian, 1972; Quillian, 1968). In a semantic network, objects are represented by nodes in the network. These nodes are connected by links that represent the relations among concepts. The links are directed, so that the first argument of a relation points to the second. For example, Figure 4.4 shows part of a simple semantic network with nodes relating to the concepts vampire and hematologist. This network has a variety of relations in it, such as drinks(vampire, blood) and studies(hematologist, blood).

One use of semantic networks was to make simple inferences (Collins & Quillian, 1972 ). One way to make inferences in a semantic network is to use marker passing . In this process, you seek a relationship between a pair of concepts by placing a marker at each of the concepts. The markers are labeled with the concept where they originated. At each step of the process, markers are placed on each node that can be reached by a link pointing outward from a node that already has a marker on it. When markers from both concepts arrive at the same node, the paths back to the original nodes are traced, and those paths specify the relationship between the concepts.

For example, in the network shown in Figure 4.4 , if I wanted to know the relationship between a vampire and a hematologist, I would start by placing markers at the vampire and hematologist nodes. At the first time step, a marker from vampire would be placed on the monster, cape, and blood nodes. A marker from hematologist would be placed at the doctor, lab coat, water, and blood nodes. The presence of markers from each of the starting concepts at the blood node would lead to the conclusion that vampires drink blood, while hematologists study blood. The amount of time that it takes to make the inference depends on the number of time steps it takes to find an intersection between paths emerging from each concept.
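A minimal marker-passing sketch over a fragment of this network (Python; the link labels are a loose reconstruction of Figure 4.4, not the exact figure):

```python
from collections import deque

# Directed links: concept -> {neighbor: relation label}. A loose
# reconstruction of part of the vampire/hematologist network.
links = {
    "vampire":      {"monster": "is-a", "cape": "wears", "blood": "drinks"},
    "hematologist": {"doctor": "is-a", "lab coat": "wears",
                     "water": "drinks", "blood": "studies"},
}

def find_intersection(a, b):
    """Spread labeled markers outward from both concepts until they meet."""
    markers = {a: {a}, b: {b}}  # node -> set of origin concepts reaching it
    frontier = deque([a, b])
    while frontier:
        node = frontier.popleft()
        for neighbor in links.get(node, {}):
            reached = markers.setdefault(neighbor, set())
            for origin in markers[node]:
                if origin not in reached:
                    reached.add(origin)
                    frontier.append(neighbor)
            if len(reached) == 2:  # markers from both origins meet here
                return neighbor
    return None

meet = find_intersection("vampire", "hematologist")
print(meet)  # blood
print(links["vampire"][meet], "vs", links["hematologist"][meet])
```

Tracing the paths back from the intersection node yields the relationship stated in the text: vampires drink blood, while hematologists study it.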

A second process often used in semantic networks is spreading activation (e.g., Anderson, 1983 ; Collins & Loftus, 1975 ). Spreading activation theories assume that there are semantic networks consisting of nodes and links, though they do not require the links to be directed. Unlike the marker passing process, activation can spread in both directions along a link.

Each node has some level of activation that determines how accessible that concept is in memory. When a concept appears in the world or in a discourse or in a sentence, then the node for that concept temporarily gets a boost in its activation. That activation then spreads across the links and activates neighboring concepts. Models of this sort have been used to explain priming effects in which processing of one concept speeds processing of related concepts. For example, a classic finding in the priming literature is that seeing a word (e.g., doctor ) speeds the identification of semantically related words like nurse (Meyer & Schvaneveldt, 1971 ).

A simple semantic network showing concepts relating to vampires and hematologists (drawn after Markman, 1999 ).

Spreading activation theories have been augmented with a number of mechanisms to help them account for more subtle experimental results. For example, links in a network may vary in their strength, which is consistent with the idea that concepts differ in their strength of association. One interesting addition to spreading activation models is the concept of fan (Anderson, 1983 ). The fan of a node in a network is the number of links that leave it, which differs for each node. In some models, the total amount of activation that is spread from one node to others is divided by the number of links, so that nodes with a low fan provide more activation to neighboring nodes than do nodes with high fan. This mechanism captures the regularity that when a node has low fan, then the presence of one concept strongly predicts the presence of the small number of other concepts to which it is connected. In contrast, when a node has a high fan, the presence of one concept doesn't predict the presence of another concept all that strongly.
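One step of spreading activation with a fan effect can be sketched as follows (Python; the network and the unit activation boost are illustrative):

```python
# One step of spreading activation: each active node passes its activation
# divided equally among its links (the fan effect). Links are undirected,
# listed from each node's side.
network = {
    "doctor":   ["nurse", "hospital"],
    "nurse":    ["doctor"],
    "hospital": ["doctor"],
    "butter":   ["bread"],
    "bread":    ["butter"],
}

def spread_once(activation):
    new = dict(activation)
    for node, act in activation.items():
        if act > 0:
            fan = len(network[node])  # number of links leaving this node
            for neighbor in network[node]:
                new[neighbor] = new.get(neighbor, 0.0) + act / fan
    return new

activation = {node: 0.0 for node in network}
activation["doctor"] = 1.0  # "doctor" is presented as a prime
activation = spread_once(activation)
print(activation["nurse"])  # 0.5: doctor's boost split over a fan of 2
print(activation["bread"])  # 0.0: no link to the primed concept
```

With a larger fan on "doctor," each neighbor would receive proportionally less activation, which is the mechanism behind the fan effect described above.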

Semantic network models have been used primarily as models of the relationships among concepts in memory. These processing mechanisms are also shared with interactive activation models that have been used to account for a variety of psychological phenomena (see, e.g., McClelland & Rumelhart, 1981 ; Read & Marcus-Newhall, 1993 ; Thagard, 1989 , 2000 ). There is much more that can be done with the relational structure in representations than just passing markers or activation among nodes. I discuss some additional aspects of structured representations in the next section.

Structured Relational Representations

A variety of aspects of psychological functioning seem to rely on people's ability to represent and reason with relations. For example, language use seems to depend crucially on the ability to use relations. Verbs bind together the actors and objects that specify actions (e.g., Gentner, 1975 , 1978 ; Talmy, 1975 ). Prepositions allow people to talk about spatial relationships (e.g., Landau & Jackendoff, 1993 ; Regier, 1996 ; Talmy, 1983 ). For both verbs and prepositions, the objects that they relate can be specified in sentences that essentially fill in the arguments to these relations.

Related to our ability to talk about complex relations is the ability to reason about the causal structure of the world. Obviously, many verbs focus on how events in the world are caused and prevented (Wolff, 2007 ; Wolff & Song, 2003 ). These verbs reflect that people are adept at understanding why events occur. For example, Schank and colleagues (Schank, 1982 ; Schank & Abelson, 1977 ) proposed that people form scripts and schemas to represent complex events like going to a restaurant or going to a doctor's office. These knowledge structures contain relationships among the components of an event that suggest the order that things typically happen. They also have causal relations among the events that explain why they are performed (see Buehner & Cheng, Chapter 12 ).

These causal relations are particularly important for helping people to reason about situations that do not go as expected. For example, when visiting a restaurant, the waiter typically brings you a menu. Thus, you expect to get a menu when you are seated. You also know why you get a menu: it is necessary to know what the restaurant serves so that you can order food. If you do not get a menu, then you might start to look around for other sources of information about what the restaurant serves, such as a board posted on a wall.

This range of relations creates an additional burden, because processes have to be developed to make use of this structure. To provide an example of both the great power of these processes as well as their computational burden, I describe work on analogical reasoning. The study of analogical reasoning is a great cognitive science success story, because there is widespread agreement on the general principles underlying analogy, even if there is disagreement about some of the fine details of how analogy is accomplished (Falkenhainer, Forbus, & Gentner, 1989 ; Gentner, 1983 ; Holyoak & Thagard, 1989 ; Hummel & Holyoak, 2003 ; Keane, 1990 ; see Holyoak, Chapter 13 ).

Analogies involve people's ability to find similarities between domains that are not similar on the surface. To find these nonliteral similarities, people seek commonalities in the relations in two domains. For example, a classic analogy is the comparison that the atom is like the solar system. This analogy came out of the Rutherford model of the atom, which was prominent in the early 20th century. The domain that people know most about is typically called the base or source (in this case, the solar system 1 ), while the domain that is less understood is called the target (in this case, the atom). Atoms are not like solar systems because of the way they look. Atoms are small, and solar systems are large. The nucleus of an atom is not hot like the sun. There are no electrons that support life as planets do. What is similar between these domains is that the electrons of the atom revolve around the nucleus in the same way that the planets revolve around the sun.

Theories of analogy assume that people represent information with structured relational representations. So we could think of the atom as being represented with some simple relations like

revolve-around (electron, nucleus) and greater (mass (nucleus), mass (electron))

which reflect that the electron revolves around the nucleus and that the mass of the nucleus is greater than the mass of the electron. Then, the solar system could be represented by a more elaborate relational system like

cause (greater (mass (sun), mass (planet)), revolve-around (planet, sun))

These representations might also have a lot more descriptive information about the attributes of the nucleus, electrons, sun, and planets.
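To make these representations concrete, here is one way they might be written down in code. This is a toy encoding of my own, not taken from any of the cited models: each relation is a tuple whose first element names the relation (or function) and whose remaining elements are its arguments, which may themselves be embedded expressions.

```python
# Toy encoding of structured relational representations as nested tuples.
# The first element names the relation or function; the rest are arguments.

# Atom: the electron revolves around the nucleus, and the nucleus
# has greater mass than the electron.
atom = [
    ("revolve-around", "electron", "nucleus"),
    ("greater", ("mass", "nucleus"), ("mass", "electron")),
]

# Solar system: the mass difference *causes* the revolution, expressed as a
# higher-order relation that takes two other relations as arguments.
solar_system = [
    ("cause",
     ("greater", ("mass", "sun"), ("mass", "planet")),
     ("revolve-around", "planet", "sun")),
]
```

Note that the `cause` expression embeds two whole relations as its arguments, which is exactly the kind of higher-order structure that spatial and featural representations cannot express.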

The process of analogical mapping seeks parallel relational structures. It does so by first matching up relations that are similar. So the revolve-around (?x,?y) relation in the representation of the solar system would be matched to the revolve-around (?x,?y) relation in the representation of the atom. Once relations are matched, the arguments of those relations are also placed in correspondence. This constraint on analogy is called parallel connectivity (Gentner, 1983), and it is incorporated into many models of analogical reasoning (Falkenhainer et al., 1989; Holyoak & Thagard, 1989; Hummel & Holyoak, 1997; Keane, 1990). So the electron in the atom and the planet in the solar system are matched, because both revolve around something (i.e., each is the first argument in the matched revolve-around (?x,?y) relation). Similarly, the nucleus of the atom and the sun in the solar system are matched because both are being revolved around. As many matching relations as possible are found between the domains, provided that each object in one domain is matched to at most one object in the other (the one-to-one mapping constraint; Gentner, 1983).
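The two constraints just described can be illustrated with a minimal sketch. This is not the structure-mapping engine or any published algorithm, just a greedy toy matcher over hypothetical tuple encodings: it pairs identically named relations, recursively places their arguments in correspondence (parallel connectivity), and rejects any pairing that would map an object to more than one partner (one-to-one mapping).

```python
from itertools import product

# Hypothetical tuple encodings of the two domains (illustration only).
solar_system = [
    ("greater", ("mass", "sun"), ("mass", "planet")),
    ("revolve-around", "planet", "sun"),
]
atom = [
    ("greater", ("mass", "nucleus"), ("mass", "electron")),
    ("revolve-around", "electron", "nucleus"),
]

def align(x, y, pairs):
    """Recursively place the arguments of two matched expressions in
    correspondence (parallel connectivity)."""
    if isinstance(x, tuple) and isinstance(y, tuple) and x[0] == y[0]:
        for xa, ya in zip(x[1:], y[1:]):
            align(xa, ya, pairs)
    elif isinstance(x, str) and isinstance(y, str):
        pairs.append((x, y))

def match(base, target):
    """Greedy matching sketch: pair relations that share a name, then map
    their arguments, keeping each object mapped to at most one partner."""
    mapping = {}
    for b, t in product(base, target):
        if b[0] != t[0]:
            continue  # only identically named relations can match
        pairs = []
        align(b, t, pairs)
        # one-to-one constraint: skip a match that conflicts with the mapping
        if all(mapping.get(x, y) == y and (x in mapping or y not in mapping.values())
               for x, y in pairs):
            mapping.update(pairs)
    return mapping
```

Running `match(solar_system, atom)` pairs the sun with the nucleus and the planet with the electron, purely because they fill the same argument slots in matched relations, not because of any surface resemblance.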

Analogical mapping processes also allow one domain to be extended based on the comparison to another. In this process of analogical inference, relations from the base that are consistent with the correspondence between the base and target can be carried over to the target. For example, in the simple representations of the atom and the solar system shown earlier, the match between the domains licenses the inference that the electron revolves around the nucleus because the nucleus is more massive than the electron. This example also demonstrates that inferences drawn from analogies may be plausible, but they need not be true.
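Carrying an inference over can be sketched in the same toy notation (again an assumed encoding, not a published model): given object correspondences from a prior structure match, a base expression is rewritten in terms of the corresponding target objects to yield a candidate inference about the target.

```python
# Object correspondences assumed to come from a prior structure match.
mapping = {"sun": "nucleus", "planet": "electron"}

# A base expression with no counterpart yet in the target.
base_extra = ("cause",
              ("greater", ("mass", "sun"), ("mass", "planet")),
              ("revolve-around", "planet", "sun"))

def substitute(expr, mapping):
    """Rewrite a base expression in terms of the corresponding target objects,
    leaving relation names and unmapped symbols unchanged."""
    if isinstance(expr, tuple):
        return (expr[0],) + tuple(substitute(a, mapping) for a in expr[1:])
    return mapping.get(expr, expr)

inference = substitute(base_extra, mapping)
# The result is the *plausible* (not guaranteed) claim that the mass
# difference causes the electron to revolve around the nucleus.
```

The hedge in the comment matters: the substitution machinery guarantees only that the inference is structurally consistent with the mapping, not that it is true of the target domain.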

One final point to make about analogy is that the structure in representations helps to define which information in a comparison is salient. In particular, some correspondences between domains may match up a series of independent relations that happen to be similar across domains. However, some relations take other relations as arguments. For example, the cause (?x,?y) relation in the solar system representation takes two other relations as arguments. Analogies can be based on similarities among entire systems of relations, some of which have embedded relations as arguments. Gentner's (1983) systematicity principle holds that mappings capturing similarities in relational systems make particularly good analogies (Clement & Gentner, 1991).

This discussion of analogy raises two important general points about structured representations. First, the structure mapping process is complex. Compared to the comparison processes for spatial and featural representations, there are more constraints that have to be specified to describe the structure mapping process. The representation provides few constraints on the nature of the process, and so significant effort has to be put into setting up appropriate processes that act on the representation.

Second, structural alignment is much more computationally intensive than either distance calculations or feature comparison (see Falkenhainer et al., 1989 for an analysis of the computational complexity of structure mapping). Generally speaking, structure-sensitive processes are more computationally intensive than those that can operate on spatial and featural representations. Thus, they are most appropriate for models of processes for which significant time and processing resources are available.

This concludes our brief tour of types of representations. It is not possible to do justice to all of these types of representations in a chapter of this length. I have discussed all of these representations in more detail elsewhere (Markman, 1999 , 2002 ).

Broader Issues About Representation

In the rest of this chapter, I focus on some broader issues and open questions in the area of knowledge representation in the field. I start with an exploration of the way that knowledge is treated in cognitive psychology and how that differs from the treatment of knowledge in other areas of psychology. Then, I discuss some approaches to representation that have focused on the importance of the physical body and of the context when thinking about mental representation. Finally, I discuss a stream of research in the field that has argued that the concept of representation has outlived its usefulness.

Content and Structure

One thing you may have noticed about this entire discussion about knowledge representation is that it has focused on the structure of the knowledge people have without regard to the content of what they know. Spatial representations, featural representations, and structured representations differ in their assumptions about how knowledge is organized. Models of this type have been used in theories that range from vision to memory to reasoning.

It is not logically necessary that the discussion of knowledge representation be organized around the structure of knowledge. Developmental psychology, for example, focuses extensively on the content of children's knowledge (e.g., Carey, 2010 ). It is quite common to see discussions within developmental psychology that focus on what information children possess and the ages at which they can use that knowledge to solve particular kinds of problems.

Cognitive psychology, however, emerged out of the cognitive revolution, and this view of mind is dominated by computation. A computer does not care what it is reasoning about. Right now, my computer is executing the commands that power my word processor so that I can write this sentence, but the computer does not care what this chapter is about. As long as the data structures function seamlessly with the rest of the program, the word processor will do its job. Similarly, cognitive theories have focused on the format of mental data structures with little regard for their content.

In many areas, though, research in cognitive psychology will need to pay careful attention to content. Because most studies in the field have focused on college undergraduates, research has typically examined people with no particular expertise in the domain in which they are being studied. There is good reason to believe, though, that experts reason differently from novices in a variety of domains because of what they know (see, e.g., Klein, 2000; Bassok & Novick, Chapter 21). There are some exceptions to this generalization, such as the work on pragmatic reasoning schemas (Cheng & Holyoak, 1985) and research on expertise, but studies incorporating the content of what people know are much more the exception than the rule. Thus, future progress on core aspects of thinking will require attention both to the content of people's knowledge and to its structure.

Embodied and Situated Cognition

The computational view of mind that dominated cognitive psychology had another key consequence. There has been a pervasive (if implicit) assumption that the sensory systems provide information to the mind and the mind in turn suggests actions that can be carried out by the motor system. More recently, this assumption has been challenged.

One approach challenges the assumption that cognition is somehow separate from perception and motor control (Barsalou, 1999 ; Glenberg, 1997 ; Wilson, 2002 ). This view, which is often called embodied cognition , suggests that understanding cognitive processing requires reorienting the view of mind from the assumption that the mind exists to process information to the assumption that cognition functions to guide action.

An early version of this view came from Gibson's ( 1986 ) work on vision. A dominant view of vision in the 1970s was that the goal of vision was to furnish the cognitive system with a veridical representation of the three-dimensional world (Marr, 1982 ). Gibson argued that the primary goal of vision was to support the goals of an organism. Consequently, vision should be biased toward information that suggests to an animal how it should act. He argued that people tend to see objects not in terms of their raw visual properties, but rather in terms of their affordances , that is, with respect to the actions that can be performed on an object. On his view, when seeing a chair, we immediately calculate whether we think we could sit on it, stand on it, or pick it up.

More recent approaches to embodied cognition have provided an important correction to the field by emphasizing the role of perception and action in cognition. This work demonstrates influences of perception on higher-level thinking, as well as influences of thinking on perception. For example, Wu and Barsalou (2009) gave people different perceptual contexts and showed how the context influenced their beliefs about the properties of objects. When people list properties of a lawn, they typically mention the grass and its greenness. When they describe a "rolled-up lawn," though, they also mention the roots, even though the grass in an ordinary lawn must have roots as well. Findings such as this one suggest that people can generate simulations of what something would look like and then use that information in conceptual tasks.

Conceptual processing may also influence perception. A number of studies have demonstrated that people's perception of the slope of a hill is influenced by how hard they believe the hill would be to climb. For example, people wearing a heavy backpack see a hill as steeper than people who are not (Proffitt, Creem, & Zosh, 2001). However, having a friend along while wearing a heavy backpack makes the hill look less steep than it would if you were alone. So your beliefs about the amount of social support you have can affect even basic perception.

The implication of this work on embodied cognition is that there is no clean separation between the mental representations involved in perception, cognition, and action. Instead, the cognitive system makes use of a variety of types of information. Even tasks that would seem on the surface to involve abstract reasoning often use perceptual and motor representations as well (see Goldin-Meadow & Cook, Chapter 32 , for a review of the role of gesture in thought).

A related area of work is called situated cognition (Hutchins, 1995 ; Suchman, 1987 ). Situated cognition takes as its starting point the recognition that human thinking occurs in particular contexts. The external world in that context also plays a role in people's thought processes. Humans use a variety of tools to help structure difficult cognitive tasks. For example, because human memory is fallible, we make lists to remind us of information we might forget otherwise. Hutchins ( 1995 ) examined the way that navigators aboard large navy ships navigate through tight harbors. He found that—while it is possible to think about the abstract process of navigation—the specific way that navigation teams work is structured by the variety of tools that they have to perform the task.

The embodied and situated cognition approaches have broadened the study of cognition by making clear that a variety of representations both inside and outside the head influence thinking. The initial work in this area focused on demonstrations of the role of perception, action, and tools on cognitive processes. Ongoing research now focuses on how these types of representations coordinate with the kinds of representations more traditionally studied within cognitive psychology.

Antirepresentationalism and a Call for Pluralism

Despite the centrality of the concept of representation within cognitive psychology, some theorists have argued that the computational approach to mind has outlived its usefulness and thus that cognitive psychology should dispense with the concept of representation (Port & Van Gelder, 1995; Spivey, 2007; Thelen & Smith, 1994). There is an important insight underlying this claim. Much of the research, particularly in the period from the 1960s through the mid-1990s, assumed that people did quite a bit of thinking before they performed any actions. For example, a classic model of how intelligent systems could plan actions assumed that people were able to generate complex plans that were then passed to a second system that carried out the plan (Fikes & Nilsson, 1971).

It is clear, however, that people coordinate their actions with the world. While it is often useful to have a general plan for how to carry out a task, that plan needs to be adapted to suit the constraints of the particular environment in which it is carried out. For example, it is fine to set a route to drive from your house to a friend's house, but if a road is closed for construction, then the route will have to be changed along the way to take into account the actual state of the world.

Research on robotics showed that quite a bit of sophisticated behavior could be generated by having robots that were sensitive primarily to what is going on in the environment at that moment (Beer, 1995 ; Brooks, 1991 ; Pfeifer & Scheier, 1999 ). These systems had representations that would satisfy the minimal criteria for a representation that I described earlier, but they did not keep track of any enduring states of the world that would support complex planning. Nonetheless, these robots could achieve simple goals like traversing a hallway while still avoiding obstacles and adapting to changing circumstances. The initial success of these systems suggested that complex representations might play a minimal role in actual cognitive systems.

As with the work on embodied cognition and situated cognition, these antirepresentationalist approaches have added valuable new insights to the study of thought. It is clear that paying attention to the way that behavior is carried out on-line is important for helping to constrain what a cognitive system is trying to accomplish. A model that assumes a complete plan can be generated and then passed off to modules that will execute that plan is simply not reasonable as a model of cognition.

At the same time, it is also clear that the hard work of generating theories in cognitive psychology will come from understanding how the variety of approaches to representation are coordinated in systems that carry out a number of complex behaviors. Important work remains to be done on the interfaces among representational systems.

For example, Kuipers (2000) has focused on the question of how to get intelligent agents to navigate through complex spatial environments. His system has a level of representation that responds dynamically to changing environmental conditions. It also has representations that generate general plans for how to get from one location in a large-scale environment to another. Between these levels, the system has representations that generate specific programs for an agent's motor system, suggesting how it should travel from one location to the next. This system has been implemented successfully in robots that navigate through environments.

The success of systems such as the one I just described suggests that ultimately cognitive theories need to embrace representational pluralism (Dale et al., 2009 ; Dove, 2009 ; Markman & Dietrich, 2000 ). Each of the systems of representation discussed in this chapter has strengths and weaknesses. Some representations—like spatial representations—are good for carrying information about moment-by-moment states of the world. These representations support simple processes that can act quickly in a changing environment. Other representations excel at storing information for the long term and for creating abstractions across many instances. For example, structured representations support the generation of relational systems that can store the essence of the similarities of domains that seem distant on the surface.

Each form of representation seems particularly well suited to specific kinds of cognitive tasks. There is a tendency when generating theories in science to focus on a single approach to explanation. Parsimony is often a desirable characteristic of theories, and so a theory that posits only one form of representation would seem better than one that posits several. But the mind is likely to make use of a variety of forms of representation. We have a large number of cognitive mechanisms that have evolved over millions of years to support disparate cognitive abilities. There is no reason to think that the representations and processes that are best for understanding low-level vision or basic motor control will also be appropriate for handling the kinds of deeply embedded syntactic structures that people encounter when reading academic prose. Thus, rather than trying to create theories that focus on only a single kind of representation, cognitive scientists need to become conversant with many different approaches to representation. The key to developing a successful cognitive model will generally be finding a set of representations and processes that together support robust intelligent behavior.

Future Directions

How can we incorporate more research on the content of people's representations into the study of knowledge representation?

How do representations generated from states of the physical body and aspects of the environment interact with representations of more abstract states?

How can we incorporate and coordinate multiple representations within a single model?

1. Some models of analogy refer to the base as the source (e.g., Holyoak & Koh, 1987).

Anderson, J. R. ( 1983 ). A spreading activation theory of memory.   Journal of Verbal Learning and Verbal Behavior , 22 , 261–295.

Barsalou, L. W. ( 1999 ). Perceptual symbol systems.   Behavioral and Brain Sciences , 22 (4), 577–660.

Beer, R. D. ( 1995 ). Computational and dynamical languages for autonomous agents. In R. F. Port & T. V. Gelder (Eds.), Mind as motion (pp. 121–148). Cambridge, MA: The MIT Press.

Blumstein, S. E., & Stevens, K. N. ( 1981 ). Phonetic features and acoustic invariance in speech.   Cognition , 10 , 25–32.

Brooks, R. A. ( 1991 ). Intelligence without representation.   Artificial Intelligence , 47 , 139–159.

Burgess, C., & Lund, K. ( 2000 ). The dynamics of meaning in memory. In E. Dietrich & A. B. Markman (Eds.), Cognitive dynamics (pp. 117–156). Mahwah, NJ: Erlbaum.

Carey, S. ( 2010 ). The origin of concepts . New York: Oxford University Press.

Cheng, P. W., & Holyoak, K. J. ( 1985 ). Pragmatic reasoning schemas.   Cognitive Psychology , 17 , 391–416.

Clement, C. A., & Gentner, D. ( 1991 ). Systematicity as a selection constraint in analogical mapping.   Cognitive Science , 15 , 89–132.

Collins, A. M., & Loftus, E. F. ( 1975 ). A spreading-activation theory of semantic processing.   Psychological Review , 82 (6), 407–428.

Collins, A. M., & Quillian, M. R. ( 1972 ). How to make a language user. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 309–351). New York: Academic Press.

Dale, R., Dietrich, E., & Chemero, A. ( 2009 ). Explanatory pluralism in cognitive science.   Cognitive Science , 33 (5), 739–742.

Dietrich, E., & Markman, A. B. ( 2003 ). Discrete thoughts: Why cognition must use discrete representations.   Mind and Language , 18 (1), 95–119.

Dove, G. ( 2009 ). Beyond perceptual symbols: A call for representational pluralism.   Cognition , 110 , 412–431.

Falkenhainer, B., Forbus, K. D., & Gentner, D. ( 1989 ). The structure-mapping engine: Algorithm and examples.   Artificial Intelligence , 41 (1), 1–63.

Fikes, R. E., & Nilsson, N. J. ( 1971 ). STRIPS: A new approach to the application of theorem-proving to problem-solving.   Artificial Intelligence , 2 , 189–208.

Fodor, J. A. ( 1981 ). Representations: Philosophical essays on the foundations of cognitive science . Cambridge, MA: The MIT Press.

Gärdenfors, P. ( 2000 ). Conceptual spaces: The geometry of thought . Cambridge, MA: The MIT Press.

Gentner, D. ( 1975 ). Evidence for the psychological reality of semantic components: The verbs of possession. In D. A. Norman & D. E. Rumelhart (Eds.), Explorations in cognition (pp. 211–246). San Francisco, CA: W.H. Freeman.

Gentner, D. ( 1978 ). On relational meaning: The acquisition of verb meaning.   Child Development , 49 , 988–998.

Gentner, D. ( 1983 ). Structure-mapping: A theoretical framework for analogy.   Cognitive Science , 7 , 155–170.

Gibson, J. J. ( 1986 ). The ecological approach to visual perception . Hillsdale, NJ: Erlbaum.

Glenberg, A. M. ( 1997 ). What memory is for.   Behavioral and Brain Sciences , 20 (1), 1–55.

Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. ( 1986 ). Induction: Processes of inference learning and discovery . Cambridge, MA: The MIT Press.

Holyoak, K. J., & Koh, K. ( 1987 ). Surface and structural similarity in analogical transfer.   Memory and Cognition , 15 (4), 332–340.

Holyoak, K. J., & Thagard, P. ( 1989 ). Analogical mapping by constraint satisfaction.   Cognitive Science , 13 (3), 295–355.

Hummel, J. E., & Holyoak, K. J. ( 1997 ). Distributed representations of structure: A theory of analogical access and mapping.   Psychological Review , 104 (3), 427–466.

Hummel, J. E., & Holyoak, K. J. ( 2003 ). A symbolic-connectionist theory of relational inference and generalization.   Psychological Review , 110 (2), 220–264.

Hutchins, E. ( 1995 ). Cognition in the wild . Cambridge, MA: The MIT Press.

Jakobson, R., Fant, G., & Halle, M. ( 1963 ). Preliminaries to speech analysis . Cambridge, MA: The MIT Press.

James, W. ( 1892 /1985). Psychology: The briefer course . South Bend, IN: University of Notre Dame Press.

Keane, M. T. G. ( 1990 ). Incremental analogizing: Theory and model. In K. J. Gilhooly, M. T. G. Keane, R. H. Logie, & G. Erdos (Eds.), Lines of thinking (Vol. 1, pp. 221–235). London: John Wiley and Sons.

Klein, G. ( 2000 ). Sources of power . Cambridge, MA: The MIT Press.

Krumhansl, C. L. ( 1978 ). Concerning the applicability of geometric models to similarity data: The interrelationship between similarity and spatial density.   Psychological Review , 85 (5), 445–463.

Kuipers, B. ( 2000 ). The spatial semantic hierarchy.   Artificial Intelligence , 119 , 191–233.

Landau, B., & Jackendoff, R. ( 1993 ). “What” and “where” in spatial language and spatial cognition.   Behavioral and Brain Sciences , 16 (2), 217–266.

Landauer, T. K., & Dumais, S. T. ( 1997 ). A solution to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge.   Psychological Review , 104 (2), 211–240.

Lenat, D., & Guha, R. V. ( 1990 ). Building large knowledge-based systems . San Francisco, CA: Addison Wesley Publishers, Inc.

Markman, A. B. ( 1999 ). Knowledge representation . Mahwah, NJ: Erlbaum.

Markman, A. B. ( 2002 ). Knowledge representation. In D. L. Medin & H. Pashler (Eds.), Stevens handbook of experimental psychology (3rd ed., Vol. 2, pp. 165–208). New York: Wiley.

Markman, A. B., & Dietrich, E. ( 2000 ). In defense of representation.   Cognitive Psychology , 40 (2), 138–171.

Marr, D. ( 1982 ). Vision . New York: W.H. Freeman and Company.

McClelland, J. L., & Rumelhart, D. E. ( 1981 ). An interactive activation model of context effects in letter perception: Part I, An account of basic findings.   Psychological Review , 88 , 375–407.

Meyer, D. E., & Schvaneveldt, R. W. ( 1971 ). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations.   Journal of Experimental Psychology , 90 (2), 227–234.

Nosofsky, R. M. ( 1986 ). Attention, similarity and the identification-categorization relationship.   Journal of Experimental Psychology: General , 115 (1), 39–57.

Palmer, S. E. ( 1978 ). Fundamental aspects of cognitive representation. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 259–302). Hillsdale, NJ: Erlbaum.

Pfeifer, R., & Scheier, C. ( 1999 ). Understanding intelligence . Cambridge, MA: The MIT Press.

Port, R. F., & Van Gelder, T. (Eds.). ( 1995 ). Mind as motion . Cambridge, MA: The MIT Press.

Proffitt, D. R., Creem, S. H., & Zosh, W. D. ( 2001 ). Seeing mountains in molehills: Geographical-slant perception.   Psychological Science , 12 (5), 418–423.

Pylyshyn, Z. W. ( 1980 ). Computation and cognition: Issues in the foundations of cognitive science.   Behavioral and Brain Sciences , 3 (1), 111–169.

Quillian, M. R. ( 1968 ). Semantic memory. In M. Minsky (Ed.), Semantic information processing (pp. 216–260). Cambridge, MA: The MIT Press.

Read, S. J., & Marcus-Newhall, A. ( 1993 ). Explanatory coherence in social explanations: A parallel distributed processing account.   Journal of Personality and Social Psychology , 65 (3), 429–447.

Regier, T. ( 1996 ). The human semantic potential . Cambridge, MA: The MIT Press.

Rips, L. J., Shoben, E. J., & Smith, E. E. ( 1973 ). Semantic distance and the verification of semantic relations.   Journal of Verbal Learning and Verbal Behavior , 12 , 1–20.

Schank, R. C. ( 1982 ). Dynamic memory . New York: Cambridge University Press.

Schank, R. C., & Abelson, R. ( 1977 ). Scripts, plans, goals and understanding . Hillsdale, NJ: Erlbaum.

Searle, J. R. ( 1980 ). Minds, brains, and programs.   Behavioral and Brain Sciences , 3 (3), 417–424.

Shafer, G. ( 1996 ). The art of causal conjecture . Cambridge, MA: The MIT Press.

Shepard, R. N. ( 1962 ). The analysis of proximities: Multidimensional scaling with an unknown distance function, I.   Psychometrika , 27 (2), 125–140.

Smith, E. E., Shoben, E. J., & Rips, L. J. ( 1974 ). Structure and process in semantic memory: A featural model for semantic decisions.   Psychological Review , 81 , 214–241.

Spivey, M. ( 2007 ). The continuity of mind . New York: Oxford University Press.

Stich, S. P., & Warfield, T. A. (Eds.). ( 1994 ). Mental Representation . Cambridge, MA: Blackwell.

Suchman, L. A. ( 1987 ). Plans and situated actions: The problem of human-machine communication . New York: Cambridge University Press.

Talmy, L. ( 1975 ). Semantics and syntax of motion. In J. Kimball (Ed.), Syntax and semantics (Vol. 4, pp. 181–238). New York: Academic Press.

Talmy, L. ( 1983 ). How language structures space. In H. L. Pick & L. P. Acredolo (Eds.), Spatial orientation: Theory, research, and application (pp. 225–282). New York: Plenum Press.

Thagard, P. ( 1989 ). Explanatory coherence.   Behavioral and Brain Sciences , 12 , 435–502.

Thagard, P. ( 2000 ). Coherence in thought and action . Cambridge, MA: The MIT Press.

Thelen, E., & Smith, L. B. ( 1994 ). A dynamic systems approach to the development of cognition and action . Cambridge, MA: The MIT Press.

Torgerson, W. S. ( 1965 ). Multidimensional scaling of similarity.   Psychometrika , 30 (4), 379–393.

Tversky, A. ( 1977 ). Features of similarity.   Psychological Review , 84 (4), 327–352.

Tversky, A., & Gati, I. ( 1982 ). Similarity, separability and the triangle inequality.   Psychological Review , 89 (2), 123–154.

Wilson, M. ( 2002 ). Six views of embodied cognition.   Psychonomic Bulletin and Review , 9 (4), 625–636.

Wolff, P. ( 2007 ). Representing causation.   Journal of Experimental Psychology: General , 136 (1), 82–111.

Wolff, P., & Song, G. ( 2003 ). Models of causation and the semantics of causal verbs.   Cognitive Psychology , 47 , 276–332.

Wu, L. L., & Barsalou, L. W. ( 2009 ). Perceptual simulation in conceptual combination: Evidence from property generation.   Acta Psychologica , 132 , 173–189.

  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Yearb Med Inform
  • v.28(1); 2019 Aug

Logo of ymi

The Interplay of Knowledge Representation with Various Fields of Artificial Intelligence in Medicine

Laszlo balkanyi.

1 Knowledge Manager, European Centre of Disease Prevention and Control (retired)

Ronald Cornet

2 Associate Professor, Department of Medical Informatics, Academic Medical Center - University of Amsterdam, Amsterdam Public Health research institute, Amsterdam, The Netherlands

Introduction : Artificial intelligence (AI) is widespread in many areas, including medicine. However, it is unclear what exactly AI encompasses. This paper aims to provide an improved understanding of medical AI and its constituent fields, and their interplay with knowledge representation (KR).

Methods : We followed a Wittgensteinian approach (“meaning by usage”) applied to content metadata labels, using the Medical Subject Headings (MeSH) thesaurus to classify the field. To understand and characterize medical AI and the role of KR, we analyzed: (1) the proportion of papers in MEDLINE related to KR and various AI fields; (2) the interplay among KR and AI fields and overlaps among the AI fields; (3) interconnectedness of fields; and (4) phrase frequency and collocation based on a corpus of abstracts.

Results : Data from over eighty thousand papers showed a steep, six-fold surge in the last 30 years. This growth happened in an escalating and cascading way. A corpus of 246,308 total words containing 21,842 unique words showed several hundred occurrences of notions such as robotics, fuzzy logic, neural networks, machine learning and expert systems in the phrase frequency analysis. Collocation analysis shows that fuzzy logic seems to be the most often collocated notion. Neural networks and machine learning are also used in the conceptual neighborhood of KR. Robotics is more isolated.

Conclusions : Authors note an escalation of published AI studies in medicine. Knowledge representation is one of the smaller areas, but also the most interconnected, and provides a common cognitive layer for other areas.

1 Introduction

Artificial intelligence (AI) is becoming increasingly important and its impact, at least potentially, is manifold; yet its exact scope is unclear. In this paper we aim to increase understanding of the structure of medical AI as a field of applied science 1 , by investigating the interaction of its constituent fields. The interplay among the various fields is studied specifically from the point of view of knowledge representation 1 . Our first objective is to shed light on what exactly AI encompasses, as seen in the medical research literature. The second objective is to analyze how the notion of knowledge representation (KR) interacts with the various fields of AI, i.e., how KR contributes to other fields of AI, and how these contribute to KR. The analysis of the relationships among these fields helps to understand the trends.

2 Background

When addressing AI from a knowledge-representation perspective, an obvious first task is to assess what exactly AI encompasses. Literature does not provide a widely accepted classification or a structural model of (medical) AI and its constituents. Many definitions and descriptions of AI exist, well summarized for example by 2 , but authors think these might not add much to the concept of medical AI for an (already) interested reader. Similarly, there is no single authoritative reference classification of the constituent fields of AI. Library science tools, including catalogue classification systems like DDC (Dewey Decimal Classification), UDC (Universal Decimal Classification), LCC (The Library of Congress Classification) 3 , are of no avail as they don’t provide subcategories. Within the medical and health domain of AI, the hierarchy of Medical Subject Headings (MeSH) provides a good, pragmatic classification of medical AI-related research papers 4 , which is shown in Figure 1 . The definitions of MeSH terms are given in Table 1 .

Figure 1. Full MeSH hierarchy of 'artificial intelligence' (MeSH version October 2018). The MeSH hierarchy levels denoted in italics are used in this paper to investigate the relations of AI fields with KR.

3 Materials and Methods

To achieve our goals we use descriptive metadata, i.e. keywords assigned by authors of papers published in this field and the Medical Subject Headings (MeSH) indexing terms assigned by MEDLINE indexers. We used MEDLINE to retrieve papers, as it consistently specifies MeSH headings. Our method followed these steps:

  • Analysis of the proportion of papers in MEDLINE characterized by relevant content metadata for the various fields of medical AI. We used PubMed-by-Year 5 to investigate publication frequencies of the various medical AI fields over time. This tool visualizes the relative proportion of cited publications, tagged by the relevant MeSH index terms, comparing the results for each year to the database as a whole. By entering multiple searches, the results can be displayed in parallel.
  • Determining the interplay between KR and the respective fields of AI by checking co-occurrence of content metadata, as well as detecting interplay among the various AI fields themselves. In this step we used PubVenn 6 , a tool that converts search terms into a codified PubMed search. As content metadata, i.e., as search terms, we combined our keywords with a series of MeSH terms. PubVenn produces a Venn diagram showing the interaction between the various AI fields, and provides extraction of the numbers and bibliographic data of citations in the overlapping areas of the diagram.
  • Visualization of the interconnectedness, using NodeXL 7 , an open-source template for Microsoft® Excel® that creates network graphs.
  • Analysis of a limited corpus of medical AI abstracts. A corpus was established containing the abstracts of the thousand most relevant papers, with relevance decided according to PubMed "Best Match" ordering. All abstracts of papers indexed with the MeSH term "artificial intelligence" AND keyworded by their authors as "knowledge representation" were added to the corpus. We used AntConc 8 to perform phrase frequency and collocation analysis of content metadata labels used as notion labels (words) in the corpus text, to better understand the interplay among fields in general, and between the fields and KR.

All search results are based on the numbers extracted from a snapshot of a search performed in October 2018. In order to retrieve papers covering knowledge representation and the respective fields of AI, we used a simple search construct: pairs of the authors' keyword 'knowledge representation' and the labels of MeSH index terms pertaining to AI, as shown in Figure 1. In the same way, we retrieved the MeSH index term pairs, using the same simple search construct, e.g., "Biological Ontologies"[All] AND "Natural Language Processing"[All]. As keywords are limited, and may not address all relevant aspects of indexed papers, we exploited text mining to gain insight into the frequencies of phrases that relate to the content descriptive metadata labels. Text mining of the abstracts followed a Wittgensteinian approach: interpreting "meaning by usage", i.e., the usage of the content metadata notions as words referring to AI 9 . We analyzed occurrences (phrase frequencies and collocation) of content descriptive metadata element labels as words used in the text of papers. We expect this to provide a deeper understanding of the underlying conceptual structure of the field. To this end, a corpus was created consisting of the title, keywords, and abstract of the first 1,000 articles according to PubMed "Best Match" order.
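The two corpus measures used in this study, phrase frequency and collocation within a fixed window, can be sketched in a few lines of Python. This is a toy illustration on made-up tokens, not the actual AntConc pipeline or the study corpus; the window size of 20 matches the setting described in the paper.

```python
from collections import Counter

# Toy stand-in for the corpus of abstracts (illustrative tokens only).
corpus = (
    "fuzzy logic supports knowledge representation in medicine "
    "neural networks learn representations machine learning uses "
    "knowledge representation fuzzy logic handles vague meaning"
).split()

# Phrase (word) frequency: how often each notion label occurs in the corpus.
freq = Counter(corpus)

# Collocation: count occurrences of a collocate within a +/-20-word window
# around each occurrence of a node word.
def collocation_count(tokens, node, collocate, window=20):
    count = 0
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            count += sum(1 for j in range(lo, hi)
                         if j != i and tokens[j] == collocate)
    return count

print(freq["fuzzy"])                                    # frequency of "fuzzy"
print(collocation_count(corpus, "fuzzy", "knowledge"))  # "knowledge" near "fuzzy"
```

Real concordancers additionally normalize these raw counts into association scores (e.g., mutual information), but the windowed counting above is the core of the collocation idea.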

4 Results

Figure 2 shows the growth of the various MeSH-defined AI fields as proportions of MEDLINE-indexed publications. The data regarding "knowledge representation" in MEDLINE were collected with the same search query formalism as the queries for those AI fields for which MeSH terms exist. Research intensified significantly in the last thirty years, with growth starting in the eighties. The ratio of AI-related research output to all MEDLINE-indexed publications is presently about six times as high as it was at the beginning of the eighties. Some areas, like (artificial) neural networks, started to grow almost exponentially in the nineties, seemingly levelling out after the year 2000. Other areas, like machine learning, show very steep growth in the last decade. There is steady growth in the area of expert systems. The area of knowledge bases (KB) research started to grow with more research on biological ontologies, understood by MeSH as a subcategory of KBs. More details and an updated version with the latest data are available here: https://goo.gl/j4fvi4

Figure 2. Proportion of MEDLINE-indexed literature on nine areas of AI over time. The envelope curve is not data based but illustrates the cascading growth, the escalation of research.

The changes (and more specifically their relations to knowledge representation ) are further analyzed in this paper by text mining the relevant literature. The results are presented below in two steps.

Step 1: Investigation of the overall interaction among KR and various fields of AI in biomedical literature

Table 2 shows the extracted data. As described in the Methods section, the first level of the MeSH hierarchy classification is used together with ‘ biological ontologies’ - even though ‘ biological ontologies’ falls under the MeSH hierarchy ‘ knowledge bases’ . This is further addressed in the discussion section.

The red and the blue numbers show the two areas ( NeurNet and MachL ) mostly cross-cited with all others. Sums of cross-citations and standard deviations (SD) are calculated from the vertical and horizontal numbers (nine data elements - see as examples the red and blue numbers) for each area. In the case of ‘ heuristics ’, most of the data elements are zero, that is why calculating a standard deviation is not relevant. The standard deviation of these number series indicates how evenly a certain field is connected to others. Obviously, a higher SD means less uniform distribution.
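The evenness measure described above can be sketched as follows; the nine cross-citation counts here are placeholder values for illustration, not the figures from Table 2.

```python
import statistics

# Hypothetical cross-citation counts of one AI field with the nine other
# fields (placeholder numbers, not the paper's data).
cross_citations = [120, 95, 110, 130, 105, 90, 115, 100, 125]

# Sum measures overall interconnectedness; a lower standard deviation
# indicates a more even distribution of connections across fields.
total = sum(cross_citations)
sd = statistics.pstdev(cross_citations)

print(total, round(sd, 1))
```

Whether a population or sample standard deviation was used is not stated in the paper; the population form is shown here only as a sketch.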

For further visual analysis, overlapping citations among various AI areas are shown as a network diagram in Figure 3 . Nodes represent AI areas by their MeSH designations. Edges represent the overlaps, the cross citations among the nodes. In the depicted network, the nodes are proportionally sized to the number of cited literature areas. The width and the style of the edges correspond to the overlap among them. Widths of edges grow with the magnitude of the overlap. This network visualization helps to see the interconnectedness between the areas and the role of Knowledge Representation in this interdisciplinary arena.

Figure 3. AI fields citations in MEDLINE, viewed as a network, where nodes are the AI fields and edges are the cross-citations.

Step 2: Phrase frequency and collocation analysis of extracted abstracts

The citation data studied above cover over eighty thousand citations. For a further, in-depth look, a limited corpus was established containing the abstracts of the thousand most relevant papers. This corpus had 246,308 total words, of which 21,842 are unique word forms. A simple phrase frequency analysis 8 shows that the following five AI fields occur among the most frequent terms in the corpus:

These frequencies show the most researched areas of AI; however, they do not shed light on their interaction with the specific aspect of language and meaning classically discussed as 'knowledge representation'. The collocation of AI fields was measured in the same corpus as the phrase frequency; both the left and right window spans were set to the maximum of 20 terms distance. In an earlier paper 10 , the authors observed that "... the central role of the term "concept" has been gradually abandoned ....". The notion of 'concept' was central to what used to be called the field of 'knowledge representation' research. Therefore, in order to analyze the current corpus on AI, in addition to the notion of 'knowledge representation', the notions 'language' and 'meaning' were also included in the collocation study. For the four most studied areas, the collocation data found in the corpus are shown in Table 4.

5 Discussion

Principal findings.

Figure 2 shows that over time the various fields related to medical AI follow a cascading and explicitly escalating evolution. 'Expert systems', studied in the eighties, were followed by 'computer neural networks' taking the lead in the nineties and the beginning of the twenty-first century. This was followed by even more research focusing on 'robotics' and currently on 'machine learning'. At the same time, research goes on steadily in the other depicted fields. The cascade character might show us how new fields, or new names for old fields, catch on and might also incorporate the results of previous areas. However, it is not trivial to see whether 'machine learning' will also take on the "cube root" function characteristics of other research fields, levelling out over time. Table 2 and Figure 3 show that although research in medical AI has branched into a broad spectrum of fields, these are well interconnected. At the same time, the interconnectedness varies greatly. 'Computer heuristics' and 'biological ontologies' are somewhat less interconnected to other fields; 'machine learning' and 'computer-based neural networks' are the fields most interconnected with all others. The term "knowledge representation" in the MeSH thesaurus is itself not an AI field, but is used in three entry terms for AI: Knowledge Representation (Computer), Knowledge Representations (Computer), and Representation, Knowledge (Computer). Table 3 shows that the four areas 'robotics', 'fuzzy logic', 'neural networks', and 'machine learning' are by far the most mentioned research areas, while 'expert systems', although above the limit of 50 citations, scores well below. Table 4 tells us that 'fuzzy logic' is the notion most collocated with the world of 'knowledge representation', 'meaning', and 'language'. This suggests some advantage of the fuzzy approach in representing and interpreting medical knowledge.
'Neural networks' and 'machine learning' are also used in the conceptual neighborhood of knowledge representation. At the same time, 'robotics', while an important area in AI, seems to be somewhat isolated from the KR world. These text mining results show that the various AI fields are well interconnected. It is interesting to see that the lowest standard deviation (SD) of cross-citations to different areas occurs for our historically central concept, 'knowledge representation'. This relatively low SD shows that KR is to this day the most "evenly" referred-to notion. This finding provides a quantitative indicator suggesting that studying KR was (and is) at the origin of the widely spreading and branching fields of AI research. We will briefly highlight three interactions.

Interaction between Knowledge Representation and Robotics

Knowledge representation plays a role in robotics, for example in categorizing emotions 11 , teaching cognitive robots to count 12 , and representing and formalizing knowledge about care 13 . These examples show how knowledge representation can be an integral part of improving the functioning of robots. It is apparently still too early to exploit the cognitive capacities of robots to contribute to knowledge representation, as no literature was found on this topic.

Interaction between Knowledge Representation and Machine Learning

The interaction between knowledge representation and machine learning is still limited, but needed. An early acknowledgement of this need, specifically for diagnostic image interpretation, is found in 14 . Already in 1988, it was stated that "Diagnostic image interpretation with learning capability demands a full model of the human expert's competence, including a considerable variety of knowledge representation schemes and inference strategies, coordinated by a meta-process controller." A recent approach is to combine graph data (represented in the Resource Description Framework and the Web Ontology Language) with neural networks to generate embeddings of nodes 15 . This combination results in embeddings that contain both explicit and implicit information. Machine learning can contribute to knowledge representation, e.g., by abstract feature selection, which has been applied for automated phenotyping in 16 . Finally, we note that natural language processing is among the domains to which machine learning and knowledge representation are applied. For example, MedTAS/P combines these three areas, as described in 17 .

Interaction between Knowledge Representation and Fuzzy Logic

Not surprisingly, most of these overlapping studies focus on the fuzzy nature of our limited knowledge in explaining and understanding particular diseases, in cardiology (e.g., Economou et al. 18 ) or in the field of oncology (see D'Aquin et al., 2004 19 ). However, interesting studies compare "fuzzy" thinking with different approaches, where "fuzziness" seems to be a connecting notion between the worlds of algorithmic and other approaches to interpreting medical data, e.g., Douali et al., in 2014 20 , on fuzzy cognitive maps and Bayesian networks, and Kwiatkowska et al., in 2007 21 , on creating prediction rules using typicality measures. Another typical area of overlapping studies is the high-level interpretation of medical knowledge, e.g., Bellamy, in 1997 22 , on "Medical diagnosis, diagnostic spaces, and fuzzy systems" and the work of Boegl et al., in 2004 23 , on knowledge acquisition in a fuzzy knowledge representation framework. Summing up, the interaction of these two fields is quite broad and covers many different areas of medical information science.

Limitations

Various widely divergent approaches involving, among others, fuzzy set theory 24 , Bayesian networks 25 , and artificial neural networks 26 27 have been applied to intelligent computing systems in healthcare. Papers concerning AI in the medical domain appear in many literature collections and research events, e.g., events by the IEEE (Institute of Electrical and Electronics Engineers), the AAAI Conference on Artificial Intelligence, MLDM (International Conference on Machine Learning and Data Mining), or the Intelligent Systems Conference, which may not be indexed in MEDLINE. However, we consider MEDLINE itself a large enough "sample" of medical AI research to represent the fields and their interplay, so that the limitation of using only MEDLINE should not materially impact the results.

As mentioned, we found over 80,000 papers that were used in the field interplay analysis. However, the more detailed text mining had to be limited to the first thousand "best match" papers because of the corpus size limitations of the analytic tools. With about 250,000 total words and over 20,000 unique word forms, the corpus size seems adequate for obtaining meaningful results from the phrase frequency and collocation analyses that followed.

For the phrase frequency study, we limited the analysis to phrases occurring at least 50 times. While the tool calculated all phrase frequencies, in our opinion a cut-off is needed to judge that a phrase occurs sufficiently frequently in the corpus to demonstrate interest in a research field. While the limit of 50 was chosen somewhat arbitrarily, we think there is not much difference among little-mentioned research fields, but there is a clear difference with the leading fields, which occur several hundred times. The tables and figures presented in the Results section give some insight into what AI in the health domain encompasses and how the various areas of AI research interact.

Definitions of AI in Related Literature

There is no common agreement on what exactly AI encompasses; thus AI can be considered a "fuzzy" term. In the field of medicine, MeSH provides a good basis for specifying the subdomains of (medical) AI. However, MeSH includes "knowledge representation" as an entry term for "artificial intelligence", while "knowledge bases" is a subcategory of AI. Outside of the medical domain, attempts to define AI and its fields have led to more philosophical answers. Larry Tesler, quoted in 28 , provides a definition that may not be helpful in itself, but does highlight the hype that periodically surrounds AI: "Artificial Intelligence is whatever hasn't been done yet". The common aspect of AI is that of computers mimicking intelligent human behavior. Whereas this is sometimes simplified as "thinking machines", Edsger Dijkstra showed this to be an inadequate metaphor: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." 29 .

6 Conclusion

The results of our analysis reveal that AI research in medicine grows in a cascading and escalating way. While neural networks, robotics, and machine learning are the research areas with the largest numbers of indexed publications, they show the lowest relative interplay with other areas, whereas knowledge representation, with one of the smallest numbers of indexed publications, shows the highest interplay, at around 45%. This supports the idea that the notion of knowledge representation plays both a historical and a foundational role in the various areas, providing a common cognitive layer, a still-needed context, even for domains such as machine learning, neural nets, fuzzy logic, and robotics.

1 The authors, chairing the IMIA WG 6, currently called "Language and Meaning in Biomedicine", formerly "Medical Concept Representation", are continuing the tradition of this WG by reaching out from time to time for a cross-disciplinary overview with other fields of biomedical information science - in this case with AI. See our WG site for more details: https://imiawg6lamb.wordpress.com/ .

What is Knowledge Representation in AI? Techniques You Need to Know

The Meaning of Knowledge Representation

Human beings are good at understanding, reasoning and interpreting knowledge. And using this knowledge, they are able to perform various actions in the real world. But how do machines perform the same? In this article, we will learn about Knowledge Representation in AI and how it helps the machines perform reasoning and interpretation using Artificial Intelligence in the following sequence:

  • What is Knowledge Representation?
  • Different Types of Knowledge
  • Cycle of Knowledge Representation
  • What is the relation between Knowledge & Intelligence?
  • Techniques of Knowledge Representation
  • Representation Requirements
  • Approaches to Knowledge Representation with Example

Knowledge Representation in AI describes the representation of knowledge. Basically, it is a study of how the beliefs, intentions, and judgments of an intelligent agent can be expressed suitably for automated reasoning. One of the primary purposes of Knowledge Representation is modeling intelligent behavior for an agent.

Knowledge Representation and Reasoning (KR, KRR) represents information from the real world in a form that a computer can understand and then utilize to solve complex real-life problems, such as communicating with human beings in natural language. Knowledge representation in AI is not just about storing data in a database; it allows a machine to learn from that knowledge and behave intelligently like a human being.

The different kinds of knowledge that need to be represented in AI include:

  • Performance
  • Meta-Knowledge
  • Knowledge-base

Now that you know about Knowledge Representation in AI, let's move on to the different types of knowledge.

There are 5 types of Knowledge:

Declarative Knowledge – It includes concepts, facts, and objects, and is expressed in declarative sentences.

Structural Knowledge – It is basic problem-solving knowledge that describes the relationships between concepts and objects.

Procedural Knowledge – It is responsible for knowing how to do something and includes rules, strategies, procedures, etc.

Meta Knowledge – It defines knowledge about the other types of knowledge.

Heuristic Knowledge – It represents the expert knowledge in a field or subject.

These are the important types of knowledge in AI. Now, let's have a look at the cycle of knowledge representation and how it works.

Cycle of Knowledge Representation in AI

Artificial Intelligence systems usually consist of various components to display their intelligent behavior. These components include:

  • Perception
  • Learning
  • Knowledge Representation & Reasoning
  • Planning
  • Execution

Here is an example to show the different components of the system and how they work:

The above diagram shows the interaction of an AI system with the real world and the components involved in showing intelligence.

  • The Perception component retrieves data or information from the environment. With the help of this component, the system can retrieve data from the environment, locate the source of noises, and check whether the AI was damaged by anything. It also defines how to respond when any sense has been detected.
  • Then, there is the Learning Component that learns from the captured data by the perception component. The goal is to build computers that can be taught instead of programming them. Learning focuses on the process of self-improvement. In order to learn new things, the system requires knowledge acquisition, inference, acquisition of heuristics, faster searches, etc.
  • The main component in the cycle is Knowledge Representation and Reasoning which shows the human-like intelligence in the machines. Knowledge representation is all about understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top-down and focus on what an agent needs to know in order to behave intelligently. Also, it defines how automated reasoning procedures can make this knowledge available as needed.
  • The Planning and Execution components depend on the analysis of knowledge representation and reasoning. Here, planning includes giving an initial state, finding their preconditions and effects, and a sequence of actions to achieve a state in which a particular goal holds. Now once the planning is completed, the final stage is the execution of the entire process.

So, these are the different components of the cycle of Knowledge Representation in AI. Now, let’s understand the relationship between knowledge and intelligence.
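The cycle described above can be sketched as a minimal agent loop; all class, method, and observation names here are illustrative stand-ins, not part of any real system.

```python
# Minimal sketch of the perceive -> learn -> reason -> plan/execute cycle.
class Agent:
    def __init__(self):
        self.knowledge = {}  # knowledge base built up by the learning component

    def perceive(self, environment):
        # Perception: retrieve data from the environment.
        return environment.get("observation")

    def learn(self, observation):
        # Learning: store what was observed (a stand-in for real learning).
        self.knowledge["last_seen"] = observation

    def reason(self):
        # Knowledge representation & reasoning: derive an action from knowledge.
        return "approach" if self.knowledge.get("last_seen") == "goal" else "explore"

    def plan_and_execute(self, action):
        # Planning and execution: carry out the chosen action.
        return f"executing: {action}"

agent = Agent()
obs = agent.perceive({"observation": "goal"})
agent.learn(obs)
print(agent.plan_and_execute(agent.reason()))
```

Removing the `knowledge` dictionary (the "knowledge part" mentioned above) would leave `reason` with nothing to decide on, which is the point the article makes about knowledge and intelligence.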


What is the Relation between Knowledge & Intelligence?

In the real world, knowledge plays a vital role in intelligence, as well as in creating artificial intelligence. It demonstrates intelligent behavior in AI agents or systems. An agent or system can act accurately on some input only when it has knowledge or experience of that input.

Let’s take an example to understand the relationship:

In this example, there is one decision-maker whose actions are justified by sensing the environment and using knowledge. But, if we remove the knowledge part here, it will not be able to display any intelligent behavior.

Now that you know the relationship between knowledge and intelligence, let’s move on to the techniques of Knowledge Representation in AI.

Techniques of Knowledge Representation in AI

There are four techniques of representing knowledge:

  • Logical Representation
  • Semantic Network Representation
  • Frame Representation
  • Production Rules

Now, let's discuss these techniques in detail.

Logical Representation 

Logical representation is a language with some definite rules which deals with propositions and has no ambiguity in representation. It draws a conclusion based on various conditions and lays down some important communication rules. It also consists of precisely defined syntax and semantics which support sound inference. Each sentence can be translated into logic using its syntax and semantics.

Advantages:

  • Logical representation helps to perform logical reasoning.
  • This representation is the basis for the programming languages.

Disadvantages:

  • Logical representations have some restrictions and are challenging to work with.
  • This technique may not be very natural, and inference may not be very efficient.
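As a minimal sketch of logical representation with sound inference, the following toy forward chainer applies implication rules (repeated modus ponens) to a set of propositional facts. The facts and rule names are made up for illustration.

```python
# Facts are atomic propositions; each rule is (set of premises, conclusion).
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),  # fever AND cough -> flu_suspected
    ({"flu_suspected"}, "order_test"),      # flu_suspected -> order_test
]

def forward_chain(facts, rules):
    # Repeatedly apply modus ponens until no new proposition can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

Because the syntax (premises, conclusion) and semantics (set membership) are precisely defined, every derived proposition follows soundly from the facts, which is the property the section highlights.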

Semantic Network Representation

Semantic networks work as an alternative to predicate logic for knowledge representation. In semantic networks, you can represent your knowledge in the form of graphical networks. Such a network consists of nodes representing objects and arcs describing the relationships between those objects. It also categorizes objects in different forms and links those objects.

This representation consists of two types of relations:

  • IS-A relation (Inheritance)
  • Kind-of relation

Advantages:

  • Semantic networks are a natural representation of knowledge.
  • They convey meaning in a transparent manner.
  • These networks are simple and easy to understand.

Disadvantages:

  • Semantic networks take more computational time at runtime.
  • They are inadequate as they do not have any equivalent quantifiers.
  • These networks are not intelligent and depend on the creator of the system.
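A semantic network with IS-A inheritance can be sketched as follows; the nodes and properties are the classic textbook examples, chosen here only for illustration.

```python
# Nodes connected by "is-a" (inheritance) arcs; properties attached to nodes
# are inherited by everything below them in the is-a chain.
is_a = {"canary": "bird", "bird": "animal"}
properties = {"bird": {"can_fly": True}, "animal": {"alive": True}}

def lookup(node, prop):
    # Walk up the is-a chain until the property is found (inheritance).
    while node is not None:
        if prop in properties.get(node, {}):
            return properties[node][prop]
        node = is_a.get(node)
    return None  # property not represented anywhere in the network

print(lookup("canary", "can_fly"))  # inherited from "bird"
print(lookup("canary", "alive"))    # inherited from "animal"
```

The runtime cost mentioned above is visible here: every query may have to traverse the whole inheritance chain instead of reading a single stored fact.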

Frame Representation

A frame is a record like structure that consists of a collection of attributes and values to describe an entity in the world. These are the AI data structure that divides knowledge into substructures by representing stereotypes situations. Basically, it consists of a collection of slots and slot values of any type and size. Slots have names and values which are called facets.

Advantages:

  • It makes programming easier by grouping related data.
  • Frame representation is easy to understand and visualize.
  • It is very easy to add slots for new attributes and relations.
  • It is also easy to include default data and to search for missing values.

Disadvantages:

  • The inference mechanism cannot proceed smoothly in a frame system.
  • It is a very generalized approach.
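The slot-and-default behaviour described above can be sketched with plain dictionaries: a frame is a mapping of slot names to values, and a slot missing from a frame falls back to its parent frame. The frame names and the `slot` helper are illustrative.

```python
# A sketch of frame representation: each frame is a set of named slots;
# a missing slot is filled in from the parent frame's default value.
# Frame and slot names are illustrative examples.

frames = {
    "Hotel-Room": {"is-a": None, "beds": 1, "has-phone": True},
    "Suite":      {"is-a": "Hotel-Room", "beds": 2},
}

def slot(frame_name, slot_name):
    """Look up a slot, inheriting defaults through the is-a chain."""
    frame = frames[frame_name]
    if slot_name in frame:
        return frame[slot_name]
    if frame.get("is-a"):
        return slot(frame["is-a"], slot_name)   # inherit the default
    return None

print(slot("Suite", "beds"))        # 2: Suite overrides the default
print(slot("Suite", "has-phone"))   # True: inherited from Hotel-Room
```

This shows both listed advantages at work: related data is grouped in one frame, and a default ("has-phone") is found automatically when a slot is missing.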

Production Rules

In a production system, the agent checks whether a rule's condition holds; if it does, the rule fires and the corresponding action is carried out. The condition part of a rule determines which rules may be applied to a problem, while the action part carries out the associated problem-solving steps. This complete process is called the recognize-act cycle.

The production rules system consists of three main parts:

  • The set of production rules
  • Working Memory
  • The recognize-act-cycle

Advantages:

  • The production rules are expressed in natural language.
  • The production rules are highly modular and can be easily removed or modified.

Disadvantages:

  • A production system does not exhibit any learning capability and does not store the results of a problem for future use.
  • During execution, many rules may be active at once, so rule-based production systems can be inefficient.
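The three parts named above (rule set, working memory, recognize-act cycle) can be sketched as follows. This is a minimal illustration with invented rule content: working memory is a dict of facts, and each rule is a (condition, action) pair of functions.

```python
# A sketch of the recognize-act cycle: working memory holds facts,
# each rule pairs a condition with an action, and a rule "fires" when
# its condition matches working memory. All names are illustrative.

def recognize_act(wm, rules, max_cycles=10):
    """Run the recognize-act cycle until no rule fires (or a cycle cap)."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(wm):      # recognize: the condition part matches
                action(wm)         # act: carry out the problem-solving step
                break
        else:
            break                  # no rule fired: the system is quiescent
    return wm

rules = [
    (lambda wm: wm.get("light") == "red",
     lambda wm: wm.update(light="green")),
]

print(recognize_act({"light": "red"}, rules))   # {'light': 'green'}
```

Note how the sketch also exhibits the stated disadvantage: every cycle re-checks every rule's condition against working memory, which is exactly why large rule bases become inefficient.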

So, these were the important techniques for Knowledge Representation in AI. Now, let’s have a look at the requirements for these representations.

A good knowledge representation system must have properties such as:

Representational Accuracy: It should be able to represent all kinds of required knowledge.

Inferential Adequacy: It should be able to manipulate the representational structures to produce new knowledge corresponding to the existing structure.

Inferential Efficiency: The ability to direct the inference mechanism in the most productive directions by storing appropriate guides.

Acquisitional Efficiency: The ability to acquire new knowledge easily using automatic methods.

Now, let’s have a look at some of the approaches to Knowledge Representation in AI along with different examples.

Approaches to Knowledge Representation in AI

There are different approaches to knowledge representation such as:

1. Simple Relational Knowledge

This is the simplest way of storing facts and uses the relational method: all the facts about a set of objects are set out systematically in columns. This approach is common in database systems, where the relationships between different entities are represented. As a result, it leaves little opportunity for inference.

This is an example of representing simple relational knowledge.
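Such a columnar layout can be sketched as a list of rows with a small query helper. The player data and the `select` function are illustrative examples, not a real dataset or API.

```python
# A sketch of simple relational knowledge: facts about a set of objects
# laid out in columns and queried like a database table.

players = [
    # (name,     team,        age)
    ("Ricky",    "Australia", 45),
    ("Sachin",   "India",     48),
    ("Virat",    "India",     33),
]

COLUMNS = {"name": 0, "team": 1, "age": 2}

def select(rows, column, value):
    """Return every row whose given column equals the given value."""
    i = COLUMNS[column]
    return [row for row in rows if row[i] == value]

print(select(players, "team", "India"))   # the two rows with team India
```

The limitation mentioned above is visible here: the table can answer "who plays for India?" by lookup, but it cannot infer anything that is not already written in a row.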

2. Inheritable Knowledge

In the inheritable knowledge approach, all data must be stored in a hierarchy of classes, arranged in a generalized or hierarchical manner. This approach relies on inheritance: elements inherit values from other members of their class, and the relation between an instance and its class is called the instance relation. Objects and values are represented as boxed nodes, with arrows pointing from objects to their values.
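The class hierarchy and instance relation described above map naturally onto class inheritance in a programming language. A minimal sketch, with illustrative class and attribute names:

```python
# A sketch of inheritable knowledge: attribute values are inherited
# down a class hierarchy unless a subclass overrides them.

class Person:
    legs = 2                 # value inherited by everything below

class Athlete(Person):
    fitness = "high"

class Cricketer(Athlete):
    plays = "cricket"

john = Cricketer()           # instance relation: john IS-A Cricketer
print(john.legs, john.fitness, john.plays)   # 2 high cricket
```

Asking `john.legs` walks up the hierarchy (Cricketer, then Athlete, then Person) exactly the way an inheritable-knowledge system follows instance and class links.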

3. Inferential Knowledge

The inferential knowledge approach represents knowledge in the form of formal logic. Thus, it can be used to derive more facts, and it guarantees correctness.

Statement 1: John is a cricketer.

Statement 2: All cricketers are athletes.

Then this can be represented as:

Cricketer(John)

∀x: Cricketer(x) → Athlete(x)
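The two statements can be run through one step of forward chaining to derive the new fact Athlete(John). In this sketch, facts are (predicate, argument) pairs, each rule maps a premise predicate to a conclusion predicate, and `derive` is an illustrative helper, not a standard API.

```python
# A sketch of inference over formal logic: repeated modus ponens
# until no new fact can be derived.

facts = {("Cricketer", "John")}        # Cricketer(John)
rules = [("Cricketer", "Athlete")]     # ∀x: Cricketer(x) → Athlete(x)

def derive(facts, rules):
    """Apply each rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

print(sorted(derive(facts, rules)))    # Athlete(John) has been derived
```

Because the derivation follows the rule mechanically, every derived fact is a logical consequence of the premises, which is the correctness guarantee the text mentions.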

These were some of the approaches to knowledge representation in AI, along with examples. I hope this has made clear what knowledge representation in AI is and the different forms it takes.


Got a question for us? Please mention it in the comments section of “Knowledge Representation in AI” and we will get back to you.

4. Procedural Knowledge

The procedural knowledge approach uses small programs and pieces of code that describe how to do specific things and how to proceed. One important rule used in this approach is the If-Then rule. This knowledge can be expressed in languages such as LISP and Prolog. Heuristic or domain-specific knowledge is easy to represent with this approach, but not every case can be represented this way.
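A sketch of the If-Then style: the "how to" is encoded directly as a procedure rather than as declarative facts. The task and function name are invented for illustration.

```python
# A sketch of procedural knowledge: the steps of a task are written
# as If-Then code, so the representation IS the procedure.

def make_tea(kettle_full, water_boiled):
    """If the kettle is full and the water is boiled, then brew;
    otherwise report the next step to take."""
    if not kettle_full:
        return "fill the kettle"
    if not water_boiled:
        return "boil the water"
    return "brew the tea"

print(make_tea(True, False))   # 'boil the water'
```

The trade-off noted above shows here too: the procedure handles exactly the cases its author anticipated, and nothing can be inferred beyond what the code spells out.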



Foundations of knowledge representation and reasoning

A guide to this volume

  • First Online: 01 January 2005


  • Gerhard Lakemeyer
  • Bernhard Nebel

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 810)


  • Knowledge Representation
  • Belief Revision
  • Inference Algorithm
  • Reasoning Task
  • Default Logic




Author information

Authors and Affiliations

Institute of Computer Science, University of Bonn, Römerstr. 164, D-53117, Bonn, Germany

Gerhard Lakemeyer

Department of Computer Science, University of Ulm, D-89069, Ulm, Germany

Bernhard Nebel


Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this chapter

Lakemeyer, G., Nebel, B. (1994). Foundations of knowledge representation and reasoning. In: Lakemeyer, G., Nebel, B. (eds) Foundations of Knowledge Representation and Reasoning. Lecture Notes in Computer Science, vol 810. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58107-3_1


DOI: https://doi.org/10.1007/3-540-58107-3_1

Published: 07 June 2005

Publisher Name: Springer, Berlin, Heidelberg

Print ISBN: 978-3-540-58107-9

Online ISBN: 978-3-540-48453-0

eBook Packages: Springer Book Archive



Computer Science > Computation and Language

Title: Rematch: Robust and Efficient Matching of Local Knowledge Graphs to Improve Structural and Semantic Similarity

Abstract: Knowledge graphs play a pivotal role in various applications, such as question-answering and fact-checking. Abstract Meaning Representation (AMR) represents text as knowledge graphs. Evaluating the quality of these graphs involves matching them structurally to each other and semantically to the source text. Existing AMR metrics are inefficient and struggle to capture semantic similarity. We also lack a systematic evaluation benchmark for assessing structural similarity between AMR graphs. To overcome these limitations, we introduce a novel AMR similarity metric, rematch, alongside a new evaluation for structural similarity called RARE. Among state-of-the-art metrics, rematch ranks second in structural similarity; and first in semantic similarity by 1--5 percentage points on the STS-B and SICK-R benchmarks. Rematch is also five times faster than the next most efficient metric.

