
Word Meaning

Word meaning has played a somewhat marginal role in early contemporary philosophy of language, which was primarily concerned with the structural features of sentence meaning and showed less interest in the nature of the word-level input to compositional processes. Nowadays, it is well-established that the study of word meaning is crucial to the inquiry into the fundamental properties of human language. This entry provides an overview of the way issues related to word meaning have been explored in analytic philosophy and a summary of relevant research on the subject in neighboring scientific domains. Though the main focus will be on philosophical problems, contributions from linguistics, psychology, neuroscience and artificial intelligence will also be considered, since research on word meaning is highly interdisciplinary.

Entry Contents

  • 1.1 The Notion of Word
  • 1.2 Theories of Word Meaning
  • 2.1 Classical Traditions
  • 2.2 Historical-Philological Semantics
  • 3.1 Early Contemporary Views
  • 3.2 Grounding and Lexical Competence
  • 3.3 The Externalist Turn
  • 3.4 Internalism
  • 3.5 Contextualism, Minimalism, and the Lexicon
  • 4.1 Structuralist Semantics
  • 4.2 Generativist Semantics
  • 4.3 Decompositional Approaches
  • 4.4 Relational Approaches
  • 5.1 Cognitive Linguistics
  • 5.2 Psycholinguistics
  • 5.3 Neurolinguistics
  • Other Internet Resources
  • Related Entries

The notions of word and word meaning are problematic to pin down, and this is reflected in the difficulties one encounters in defining the basic terminology of lexical semantics. In part, this depends on the fact that the term ‘word’ itself is highly polysemous (see, e.g., Matthews 1991; Booij 2007; Lieber 2010). For example, in ordinary parlance ‘word’ is ambiguous between a type-level reading (as in “Color and colour are spellings of the same word”), an occurrence-level reading (as in “there are thirteen words in the tongue-twister How much wood would a woodchuck chuck if a woodchuck could chuck wood?”), and a token-level reading (as in “John erased the last two words on the blackboard”). Before proceeding further, let us then elucidate the notion of word in more detail (Section 1.1), and lay out the key questions that will guide our discussion of word meaning in the rest of the entry (Section 1.2).
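
To make the type-level and occurrence-level readings concrete, here is a minimal sketch in Python (an illustration added here, not part of the entry; whitespace tokenization and lowercasing are simplifying assumptions) that counts the tongue-twister both ways.

    # Minimal sketch (not from the entry): the occurrence-level vs. type-level
    # readings of 'word', applied to the tongue-twister quoted above.
    sentence = ("How much wood would a woodchuck chuck "
                "if a woodchuck could chuck wood")

    occurrences = sentence.lower().split()   # occurrence-level reading
    word_types = set(occurrences)            # type-level reading

    print(len(occurrences))  # 13 word occurrences
    print(len(word_types))   # 9 distinct word types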

We can distinguish two fundamental approaches to the notion of word. On one side, we have linguistic approaches, which characterize the notion of word by reflecting on its explanatory role in linguistic research (for a survey on explanation in linguistics, see Egré 2015). These approaches often end up splitting the notion of word into a number of more fine-grained and theoretically manageable notions, but still tend to regard ‘word’ as a term that zeroes in on a scientifically respectable concept (e.g., Di Sciullo & Williams 1987). For example, words are the primary locus of stress and tone assignment, the basic domain of morphological conditions on affixation, cliticization, compounding, and the theme of phonological and morphological processes of assimilation, vowel shift, metathesis, and reduplication (Bromberger 2011).

On the other side, we have metaphysical approaches, which attempt to pin down the notion of word by inquiring into the metaphysical nature of words. These approaches typically deal with such questions as “what are words?”, “how should words be individuated?”, and “under what conditions do two utterances count as utterances of the same word?”. For example, Kaplan (1990, 2011) has proposed to replace the orthodox type-token account of the relation between words and word tokens with a “common currency” view on which words relate to their tokens as continuants relate to stages in four-dimensionalist metaphysics (see the entries on types and tokens and identity over time). Other contributions to this debate can be found, among others, in McCulloch (1991), Cappelen (1999), Alward (2005), Hawthorne & Lepore (2011), Sainsbury & Tye (2012), Gasparri (2016), and Irmak (forthcoming).

For the purposes of this entry, we can rely on the following stipulation. Every natural language has a lexicon organized into lexical entries, which contain information about word types or lexemes. These are the smallest linguistic expressions that are conventionally associated with a non-compositional meaning and can be articulated in isolation to convey semantic content. Word types relate to word tokens and occurrences just like phonemes relate to phones in phonological theory. To understand the parallelism, think of the variations in the place of articulation of the phoneme /n/, which is pronounced as the voiced bilabial nasal [m] in “ten bags” and as the voiced velar nasal [ŋ] in “ten gates”. Just as phonemes are abstract representations of sets of phones (each defining one way the phoneme can be instantiated in speech), lexemes can be defined as abstract representations of sets of words (each defining one way the lexeme can be instantiated in sentences). Thus, ‘do’, ‘does’, ‘done’ and ‘doing’ are morphologically and graphically marked realizations of the same abstract word type do. To wrap everything into a single formula, we can say that the lexical entries listed in a lexicon set the parameters defining the instantiation potential of word types in sentences, utterances and inscriptions (cf. Murphy 2010). In what follows, unless otherwise indicated, our talk of “word meaning” should be understood as talk of “word type meaning” or “lexeme meaning”, in the sense we just illustrated.
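
As a rough illustration of this stipulation, one might picture a lexical entry as pairing a lexeme with the forms that can instantiate it in sentences. The sketch below is only an assumed toy representation suggested by the ‘do’ example; the dictionary layout and the "category" field are illustrative assumptions, not a proposal made in the entry.

    # Toy lexicon (assumed representation, for illustration only): each lexical
    # entry records a lexeme (word type) together with the word forms that
    # realize it, mirroring the claim that lexemes are abstract representations
    # of sets of words.
    lexicon = {
        "do": {
            "category": "verb",                       # assumed grammatical label
            "forms": ["do", "does", "done", "doing"]  # realizations cited above
        }
    }

    def realizations(lexeme: str) -> list[str]:
        """Return the word forms through which a lexeme can be instantiated."""
        return lexicon[lexeme]["forms"]

    print(realizations("do"))  # ['do', 'does', 'done', 'doing']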

As with general theories of meaning (see the entry on theories of meaning ), two kinds of theory of word meaning can be distinguished. The first kind, which we can label a semantic theory of word meaning, is a theory interested in clarifying what meaning-determining information is encoded by the words of a natural language. A framework establishing that the word ‘bachelor’ encodes the lexical concept adult unmarried male would be an example of a semantic theory of word meaning. The second kind, which we can label a foundational theory of word meaning, is a theory interested in elucidating the facts in virtue of which words come to have the semantic properties they have for their users. A framework investigating the dynamics of semantic change and social coordination in virtue of which the word ‘bachelor’ is assigned the function of expressing the lexical concept adult unmarried male would be an example of a foundational theory of word meaning. Likewise, it would be the job of a foundational theory of word meaning to determine whether words have the semantic properties they have in virtue of social conventions, or whether social conventions do not provide explanatory purchase on the facts that ground word meaning (see the entry on convention ).

Obviously, the endorsement of a given semantic theory is bound to place important constraints on the claims one might propose about the foundational attributes of word meaning, and vice versa . Semantic and foundational concerns are often interdependent, and it is difficult to find theories of word meaning which are either purely semantic or purely foundational. According to Ludlow (2014), for example, the fact that word meaning is systematically underdetermined (a semantic matter) can be explained in part by looking at the processes of linguistic negotiation whereby discourse partners converge on the assignment of shared meanings to the words of their language (a foundational matter). However, semantic and foundational theories remain in principle different and designed to answer partly non-overlapping sets of questions.

Our focus in this entry will be on semantic theories of word meaning, i.e., on theories that try to provide an answer to such questions as “what is the nature of word meaning?”, “what do we know when we know the meaning of a word?”, and “what (kind of) information must a speaker associate with the words of a language in order to be a competent user of its lexicon?”. However, we will engage in foundational considerations whenever necessary to clarify how a given framework addresses issues in the domain of a semantic theory of word meaning.

2. Historical Background

The study of word meaning became a mature academic enterprise in the 19th century, with the birth of historical-philological semantics (Section 2.2). Yet, matters related to word meaning had been the subject of much debate in earlier times. We can distinguish three major classical approaches to word meaning: speculative etymology, rhetoric, and classical lexicography (Meier-Oeser 2011; Geeraerts 2013). We describe them briefly in Section 2.1.

The prototypical example of speculative etymology is perhaps the Cratylus (383a-d), where Plato presents his well-known naturalist thesis about word meaning. According to Plato, natural kind terms express the essence of the objects they denote and words are appropriate to their referents insofar as they implicitly describe the properties of their referents (see the entry on Plato’s Cratylus). For example, the Greek word ‘anthrôpos’ can be broken down into anathrôn ha opôpe, which translates as “one who reflects on what he has seen”: the word used to denote humans reflects their being the only animal species which possesses the combination of vision and intelligence. For speculative etymology, there is a natural or non-arbitrary relation between words and their meaning, and the task of the theorist is to make this relation explicit through an analysis of the descriptive, often phonoiconic mechanisms underlying the genesis of words. More on speculative etymology in Malkiel (1993), Fumaroli (1999), and Del Bello (2007).

The primary aim of the rhetorical tradition was the study of figures of speech. Some of these concern sentence-level variables such as the linear order of the words occurring in a sentence (e.g., parallelism, climax, anastrophe); others are lexical in nature and depend on using words in a way not intended by their normal or literal meaning (e.g., metaphor, metonymy, synecdoche). Although it originated for stylistic and literary purposes, the identification of regular patterns in the figurative use of words initiated by the rhetorical tradition provided a first organized framework to investigate the semantic flexibility of words, and laid the groundwork for further inquiry into our ability to use lexical expressions beyond the boundaries of their literal meaning. More on the rhetorical tradition in Kennedy (1994), Herrick (2004), and Toye (2013).

Finally, classical lexicography and the practice of writing dictionaries played an important role in systematizing the descriptive data on which later inquiry would rely to illuminate the relationship between words and their meaning. Putnam’s (1970) claim that it was the phenomenon of writing (and needing) dictionaries that gave rise to the idea of a semantic theory is probably an overstatement. But the inception of lexicography certainly had an impact on the development of modern theories of word meaning. The practice of separating dictionary entries via lemmatization and defining them through a combination of semantically simpler elements provided a stylistic and methodological paradigm for much subsequent research on lexical phenomena, such as decompositional theories of word meaning. More on classical lexicography in Béjoint (2000), Jackson (2002), and Hanks (2013).

Historical-philological semantics incorporated elements from all the above classical traditions and dominated the linguistic scene roughly from 1870 to 1930, with the work of scholars such as Michel Bréal, Hermann Paul, and Arsène Darmesteter (Gordon 1982). In particular, it absorbed from speculative etymology an interest in the conceptual mechanisms underlying the formation of word meaning, it acquired from rhetorical analysis a taxonomic toolkit for the classification of lexical phenomena, and it assimilated from lexicography and textual philology the empirical basis of descriptive data that subsequent theories of word meaning would have to account for (Geeraerts 2013).

On the methodological side, the key features of the approach to word meaning introduced by historical-philological semantics can be summarized as follows. First, it had a diachronic and pragmatic orientation. That is, it was primarily concerned with the historical evolution of word meaning rather than with word meaning statically understood, and attributed great importance to the contextual flexibility of word meaning. Witness Paul’s (1920 [1880]) distinction between usuelle Bedeutung and okkasionelle Bedeutung , or Bréal’s (1924 [1897]) account of polysemy as a byproduct of semantic change. Second, it looked at word meaning primarily as a psychological phenomenon. It assumed that the semantic properties of words should be defined in mentalistic terms (i.e., words signify “concepts” or “ideas” in a broad sense), and that the dynamics of sense modulation, extension, and contraction that underlie lexical change correspond to broader patterns of conceptual activity in the human mind. Interestingly, while the classical rhetorical tradition had conceived of tropes as marginal linguistic phenomena whose investigation, albeit important, was primarily motivated by stylistic concerns, for historical-philological semantics the psychological mechanisms underlying the production and the comprehension of figures of speech were part of the ordinary life of languages, and engines of the evolution of all aspects of lexical systems (Nerlich 1992).

The contribution made by historical-philological semantics to the study of word meaning had a long-lasting influence. First, with its emphasis on the principles of semantic change, historical-philological semantics was the first systematic framework to focus on the dynamic nature of word meaning, and established contextual flexibility as the primary explanandum for a theory of word meaning (Nerlich & Clarke 1996, 2007). This feature of historical-philological semantics is a clear precursor of the emphasis placed on context-sensitivity by many subsequent approaches to word meaning, both in philosophy (see Section 3) and in linguistics (see Section 4). Second, the psychologistic approach to word meaning fostered by historical-philological semantics added to the agenda of linguistic research the question of how word meaning relates to cognition at large. If word meaning is essentially a psychological phenomenon, what psychological categories should be used to characterize it? What is the dividing line separating the aspects of our mental life that constitute knowledge of word meaning from those that do not? As we shall see, these questions will constitute a central concern for cognitive theories of word meaning (see Section 5).

3. Philosophy of Language

In this section we shall review some semantic and metasemantic theories in analytic philosophy that bear on how lexical meaning should be conceived and described. We shall follow a roughly chronological order. Some of these theories, such as Carnap’s theory of meaning postulates and Putnam’s theory of stereotypes, have a strong focus on lexical meaning, whereas others, such as Montague semantics, regard it as a side issue. However, even the views that treat lexical meaning as a side issue form an integral part of the philosophical debate on word meaning.

By taking the connection of thoughts and truth as the basic issue of semantics and regarding sentences as “the proper means of expression for a thought” (Frege 1979a [1897]), Frege paved the way for the 20th-century priority of sentential meaning over lexical meaning: the semantic properties of subsentential expressions such as individual words were regarded as derivative, and identified with their contribution to sentential meaning. Sentential meaning was in turn identified with truth conditions, most explicitly in Wittgenstein’s Tractatus logico-philosophicus (1922). However, Frege never lost interest in the “building blocks of thoughts” (Frege 1979b [1914]), i.e., in the semantic properties of subsentential expressions. Indeed, his theory of sense and reference for names and predicates may be counted as the inaugural contribution to lexical semantics within the analytic tradition (see the entry on Gottlob Frege). It should be noted that Frege did not attribute semantic properties to lexical units as such, but to what he regarded as a sentence’s logical constituents: e.g., not to the word ‘dog’ but to the predicate ‘is a dog’. In later work this distinction was obliterated and Frege’s semantic notions came to be applied to lexical units.

Possibly because of lack of clarity affecting the notion of sense, and surely because of Russell’s (1905) authoritative criticism of Fregean semantics, word meaning disappeared from the philosophical scene during the 1920s and 1930s. In Wittgenstein’s Tractatus the “real” lexical units, i.e., the constituents of a completely analyzed sentence, are just names, whose semantic properties are exhausted by their reference. In Tarski’s (1933) work on formal languages, which was taken as definitional of the very field of semantics for some time, lexical units are semantically categorized into different classes (individual constants, predicative constants, functional constants) depending on the logical type of their reference, i.e., according to whether they designate individuals in a domain of interpretation, classes of individuals (or of n -tuples of individuals), or functions defined over the domain. However, Tarski made no attempt nor felt any need to represent semantic differences among expressions belonging to the same logical type (e.g., between one-place predicates such as ‘dog’ and ‘run’, or between two-place predicates such as ‘love’ and ‘left of’). See the entry on Alfred Tarski .

Quine (1943) and Church (1951) rehabilitated Frege’s distinction of sense and reference. Non-designating words such as ‘Pegasus’ cannot be meaningless: it is precisely the meaning of ‘Pegasus’ that allows speakers to establish that the word lacks reference. Moreover, as Frege (1892) had argued, true factual identities such as “Morning Star = Evening Star” do not state synonymies; if they did, any competent speaker of the language would be aware of their truth. Along these lines, Carnap (1947) proposed a new formulation of the sense/reference dichotomy, which was translated into the distinction between intension and extension. The notion of intension was intended to be an explicatum of Frege’s “obscure” notion of sense: two expressions have the same intension if and only if they have the same extension in every possible world or, in Carnap’s terminology, in every state description (i.e., in every maximal consistent set of atomic sentences and negations of atomic sentences). Thus, ‘round’ and ‘spherical’ have the same intension (i.e., they express the same function from possible worlds to extensions) because they apply to the same objects in every possible world. Carnap later suggested that intensions could be regarded as the content of lexical semantic competence: to know the meaning of a word is to know its intension, “the general conditions which an object must fulfill in order to be denoted by [that] word” (Carnap 1955). However, such general conditions were not spelled out by Carnap (1947). Consequently, his system did not account, any more than Tarski’s, for semantic differences and relations among words belonging to the same semantic category: there were possible worlds in which the same individual a could be both a married man and a bachelor, as no constraints were placed on either word’s intension. One consequence, as Quine (1951) pointed out, was that in Carnap’s system, which was supposed to single out analytic truths as those true in every possible world, “Bachelors are unmarried”—intuitively, a paradigmatic analytic truth—turned out to be synthetic rather than analytic.

To remedy what he agreed was an unsatisfactory feature of his system, Carnap (1952) introduced meaning postulates , i.e., stipulations on the relations among the extensions of lexical items. For example, the meaning postulate

  • (MP) \(\forall x (\mbox{bachelor}(x) \supset \mathord{\sim}\mbox{married} (x))\)

stipulates that any individual that is in the extension of ‘bachelor’ is not in the extension of ‘married’. Meaning postulates can be seen either as restrictions on possible worlds or as relativizing analyticity to possible worlds. On the former option we shall say that “If Paul is a bachelor then Paul is unmarried” holds in every admissible possible world, while on the latter we shall say that it holds in every possible world in which (MP) holds . Carnap regarded the two options as equivalent; nowadays, the former is usually preferred. Carnap (1952) also thought that meaning postulates expressed the semanticist’s “intentions” with respect to the meanings of the descriptive constants, which may or may not reflect linguistic usage; again, today postulates are usually understood as expressing semantic relations (synonymy, analytic entailment, etc.) among lexical items as currently used by competent speakers.
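
The effect of (MP) can be made vivid with a small computational sketch. The following Python toy model is an illustration assumed here, not Carnap’s own formalism: it treats a possible world as an assignment of extensions to ‘bachelor’ and ‘married’ and shows that “If Paul is a bachelor then Paul is unmarried” fails in some unrestricted worlds but holds in every world admitted by (MP).

    # Toy model (assumed for illustration): possible worlds as assignments of
    # extensions to predicates, and a Carnapian meaning postulate as a filter
    # on admissible worlds.
    from itertools import chain, combinations

    individuals = ["paul", "mary"]

    def all_subsets(items):
        """Every possible extension over the given individuals."""
        return [set(c) for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))]

    # With no constraints, 'bachelor' and 'married' may overlap in some worlds.
    worlds = [{"bachelor": b, "married": m}
              for b in all_subsets(individuals)
              for m in all_subsets(individuals)]

    def mp(world):
        """(MP): whatever is in the extension of 'bachelor' is not in 'married'."""
        return world["bachelor"].isdisjoint(world["married"])

    admissible = [w for w in worlds if mp(w)]

    def paul_unmarried_if_bachelor(world):
        """'If Paul is a bachelor then Paul is unmarried'."""
        return ("paul" not in world["bachelor"]) or ("paul" not in world["married"])

    print(all(paul_unmarried_if_bachelor(w) for w in worlds))      # False: fails in some unrestricted world
    print(all(paul_unmarried_if_bachelor(w) for w in admissible))  # True: holds in every admissible world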

In the late 1960s and early 1970s, Montague (1974) and other philosophers and linguists (Kaplan, Kamp, Partee, and D. Lewis among others) set out to apply to the analysis of natural language the notions and techniques that had been introduced by Tarski and Carnap and further developed in Kripke’s possible worlds semantics (see the entry on Montague semantics ). Montague semantics can be represented as aiming to capture the inferential structure of a natural language: every inference that a competent speaker would regard as valid should be derivable in the theory. Some such inferences depend for their validity on syntactic structure and on the logical properties of logical words, like the inference from “Every man is mortal and Socrates is a man” to “Socrates is mortal”. Other inferences depend on properties of non-logical words that are usually regarded as semantic, like the inference from “Kim is pregnant” to “Kim is not a man”. In Montague semantics, such inferences are taken care of by supplementing the theory with suitable Carnapian meaning postulates. Yet, some followers of Montague regarded such additions as spurious: the aims of semantics, they said, should be distinguished from those of lexicography. The description of the meaning of non-logical words requires considerable world knowledge: for example, the inference from “Kim is pregnant” to “Kim is not a man” is based on a “biological” rather than on a “logical” generalization. Hence, we should not expect a semantic theory to furnish an account of how any two expressions belonging to the same syntactic category differ in meaning (Thomason 1974). From such a viewpoint, Montague semantics would not differ significantly from Tarskian semantics in its account of lexical meaning. But not all later work within Montague’s program shared such a skepticism about representing aspects of lexical meaning within a semantic theory, using either componential analysis (Dowty 1979) or meaning postulates (Chierchia & McConnell-Ginet 2000).

For those who believe that meaning postulates can exhaust lexical meaning, the issue arises of how to choose them, i.e., of how—and whether—to delimit the set of meaning-relevant truths with respect to the set of all true statements in which a given word occurs. As we just saw, Carnap himself thought that the choice could only be the expression of the semanticist’s intentions. However, we seem to share intuitions of analyticity, i.e., we seem to regard some, but not all, sentences of a natural language as true by virtue of the meaning of the occurring words. Such intuitions are taken to reflect objective semantic properties of the language, which the semanticist should describe rather than impose at will. Quine (1951) did not challenge the existence of such intuitions, but he argued that they could not be cashed out in the form of a scientifically respectable criterion separating analytic truths (“Bachelors are unmarried”) from synthetic truths (“Aldo’s uncle is a bachelor”), whose truth does not depend on meaning alone. Though Quine’s arguments were often criticized (for recent criticisms, see Williamson 2007), and in spite of Chomsky’s constant endorsement of analyticity (see e.g. 2000: 47, 61–2), within philosophy the analytic/synthetic distinction was never fully vindicated (for an exception, see Russell 2008). Hence, it was widely believed that lexical meaning could not be adequately described by meaning postulates. Fodor and Lepore (1992) argued that this left semantics with two options: lexical meanings were either atomic (i.e., they could not be specified by descriptions involving other meanings) or they were holistic, i.e., only the set of all true sentences of the language could count as fixing them.

Neither alternative looked promising. Holism incurred objections connected with the acquisition and the understanding of language: how could individual words be acquired by children, if grasping their meaning involved, somehow, semantic competence on the whole language? And how could individual sentences be understood if the information required to understand them exceeded the capacity of human working memory? (For an influential criticism of several varieties of holism, see Dummett 1991; for a review, Pagin 2006). Atomism, in turn, ran against strong intuitions of (at least some) relations among words being part of a language’s semantics: it is because of what ‘bachelor’ means that it doesn’t make sense to suppose we could discover that some bachelors are married. Fodor (1998) countered this objection by reinterpreting allegedly semantic relations as metaphysically necessary connections among extensions of words. However, sentences that are usually regarded as analytic, such as “Bachelors are unmarried”, are not easily seen as just metaphysically necessary truths like “Water is H₂O”. If water is H₂O, then its metaphysical essence consists in being H₂O (whether we know it or not); but there is no such thing as a metaphysical essence that all bachelors share—an essence that could be hidden to us, even though we use the word ‘bachelor’ competently. On the contrary, on acquiring the word ‘bachelor’ we acquire the belief that bachelors are unmarried (Quine 1986); by contrast, many speakers that have ‘water’ in their lexical repertoire do not know that water is H₂O. The difficulties of atomism and holism opened the way to vindications of molecularism (e.g., Perry 1994; Marconi 1997), the view on which only some relations among words matter for acquisition and understanding (see the entry on meaning holism).

While mainstream formal semantics went with Carnap and Montague, supplementing the Tarskian apparatus with the possible worlds machinery and defining meanings as intensions, Davidson (1967, 1984) put forth an alternative suggestion. Tarski had shown how to provide a definition of the truth predicate for a (formal) language L : such a definition is materially adequate (i.e., it is a definition of truth , rather than of some other property of sentences of L ) if and only if it entails every biconditional of the form

  • (T) S is true in L iff p ,

where S is a sentence of L and p is its translation into the metalanguage of L in which the definition is formulated. Thus, Tarski’s account of truth presupposes that the semantics of both L and its metalanguage is fixed (otherwise it would be undetermined whether S translates into p ). On Tarski’s view, each biconditional of form (T) counts as a “partial definition” of the truth predicate for sentences of L (see the entry on Tarski’s truth definitions ). By contrast, Davidson suggested that if one took the notion of truth for granted, then T-biconditionals could be read as collectively constituting a theory of meaning for L , i.e., as stating truth conditions for the sentences of L . For example,

  • (W) “If the weather is bad then Sharon is sad” is true in English iff either the weather is not bad or Sharon is sad

states the truth conditions of the English sentence “If the weather is bad then Sharon is sad”. Of course, (W) is intelligible only if one understands the language in which it is phrased, including the predicate ‘true in English’. Davidson thought that the recursive machinery of Tarski’s definition of truth could be transferred to the suggested semantic reading, with extensions to take care of the forms of natural language composition that Tarski had neglected because they had no analogue in the formal languages he was dealing with. Unfortunately, few such extensions were ever spelled out by Davidson or his followers. Moreover, it is difficult to see how, giving up possible worlds and intensions in favor of a purely extensional theory, the Davidsonian program could account for the semantics of propositional attitude ascriptions of the form “A believes (hopes, imagines, etc.) that p”.
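
A miniature example may help fix ideas about how Tarski-style recursion could deliver T-biconditionals such as (W). The sketch below is an illustration assumed here, not a reconstruction of Davidson’s own program: it defines truth for a tiny object language recursively over a model of the relevant atomic facts, with the atomic clauses playing the role of homophonic lexical axioms.

    # Toy recursive truth definition (assumed illustration): complex sentences of
    # a tiny object language are evaluated from the truth of their parts, in the
    # spirit of the T-biconditionals discussed above.
    from typing import Union

    Sentence = Union[str, tuple]  # atomic sentences are strings; ('not', s) and
                                  # ('if', s1, s2) build complex sentences

    # The model fixes the atomic facts (values chosen arbitrarily for the example).
    facts = {"the weather is bad": False, "Sharon is sad": True}

    def true_in_L(s: Sentence) -> bool:
        """Recursive truth clause for the toy language L."""
        if isinstance(s, str):
            return facts[s]                   # homophonic base clause
        if s[0] == "not":
            return not true_in_L(s[1])
        if s[0] == "if":                      # material conditional, as in (W)
            return (not true_in_L(s[1])) or true_in_L(s[2])
        raise ValueError(f"unknown construction: {s[0]!r}")

    # (W): the conditional is true iff either the weather is not bad or Sharon is sad.
    print(true_in_L(("if", "the weather is bad", "Sharon is sad")))  # True in this model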

Construed as theorems of a semantic theory, T-biconditionals were often accused of being uninformative (Putnam 1975; Dummett 1976): to understand them, one has to already possess the information they are supposed to provide. This is particularly striking in the case of lexical axioms such as the following:

  • (V1) Val( x , ‘man’) iff x is a man;
  • (V2) Val(\(\langle x,y\rangle\), ‘knows’) iff x knows y .

(To be read, respectively, as “the predicate ‘man’ applies to x if and only if x is a man” and “the predicate ‘knows’ applies to the pair \(\langle x, y\rangle\) if and only if x knows y”). Here it is apparent that in order to understand (V1) one must know what ‘man’ means, which is just the information that (V1) is supposed to convey (as the theory, being purely extensional, identifies meaning with reference). Some Davidsonians, though admitting that statements such as (V1) and (V2) are in a sense “uninformative”, insist that what (V1) and (V2) state is no less “substantive” (Larson & Segal 1995). To prove their point, they appeal to non-homophonic versions of lexical axioms, i.e., to the axioms of a semantic theory for a language that does not coincide with the (meta)language in which the theory itself is phrased. Such would be, e.g.,

  • (V3) Val ( x , ‘man’) si et seulement si x est un homme.

(V3), they argue, is clearly substantive, yet what it says is exactly what (V1) says, namely, that the word ‘man’ applies to a certain category of objects. Therefore, if (V3) is substantive, so is (V1). But this is beside the point. The issue is not whether (V1) expresses a proposition; it clearly does, and it is, in this sense, “substantive”. But what is relevant here is informative power: to one who understands the metalanguage of (V3), i.e., French, (V3) may communicate new information, whereas there is no circumstance in which (V1) would communicate new information to one who understands English.

In the mid-1970s, Dummett raised the issue of the proper place of lexical meaning in a semantic theory. If the job of a theory of meaning is to make the content of semantic competence explicit—so that one could acquire semantic competence in a language L by learning an adequate theory of meaning for L —then the theory ought to reflect a competent speaker’s knowledge of circumstances in which she would assert a sentence of L , such as “The horse is in the barn”, as distinct from circumstances in which she would assert “The cat is on the mat”. This, in turn, appears to require that the theory yields explicit information about the use of ‘horse’, ‘barn’, etc., or, in other words, that it includes information which goes beyond the logical type of lexical units. Dummett identified such information with a word’s Fregean sense. However, he did not specify the format in which word senses should be expressed in a semantic theory, except for words that could be defined (e.g., ‘aunt’ = “sister of a parent”): in such cases, the definiens specifies what a speaker must understand in order to understand the word (Dummett 1991). But of course, not all words are of this kind. For other words, the theory should specify what it is for a speaker to know them, though we are not told how exactly this should be done. Similarly, Grandy (1974) pointed out that by identifying the meaning of a word such as ‘wise’ as a function from possible worlds to the sets of wise people in those worlds, Montague semantics only specifies a formal structure and eludes the question of whether there is some possible description for the functions which are claimed to be the meanings of words. Lacking such descriptions, possible worlds semantics is not really a theory of meaning but a theory of logical form or logical validity. Again, aside from suggesting that “one would like the functions to be given in terms of computation procedures, in some sense”, Grandy had little to say about the form of lexical descriptions.

In a similar vein, Partee (1981) argued that Montague semantics, like every compositional or structural semantics, does not uniquely fix the intensional interpretation of words. The addition of meaning postulates does rule out some interpretations (e.g., interpretations on which the extension of ‘bachelor’ and the extension of ‘married’ may intersect in some possible world). However, it does not reduce them to the unique, “intended” or, in Montague’s words, “actual” interpretation (Montague 1974). Hence, standard model-theoretic semantics does not capture the whole content of a speaker’s semantic competence, but only its structural aspects. Fixing “the actual interpretation function” requires more than language-to-language connections as encoded by, e.g., meaning postulates: it requires some “language-to-world grounding ”. Arguments to the same effect were developed by Bonomi (1983) and Harnad (1990). In particular, Harnad had in mind the simulation of human semantic competence in artificial systems: he suggested that symbol grounding could be implemented, in part, by “feature detectors” picking out “invariant features of objects and event categories from their sensory projections” (for recent developments see, e.g., Steels & Hild 2012). Such a cognitively oriented conception of grounding differs from Partee’s Putnam-inspired view, on which the semantic grounding of lexical items depends on the speakers’ objective interactions with the external world in addition to their narrow psychological properties.
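
Harnad’s proposal lends itself to a schematic sketch. In the toy code below (an illustration assumed here, with an invented word, invented “sensory” features, and an arbitrary threshold), a feature detector maps sensory projections to a verdict about whether a word applies, which is one very simplified way to picture language-to-world grounding.

    # Schematic sketch of grounding via feature detectors (the word, the features,
    # and the threshold are invented for illustration).
    def striped_detector(sensory_projection: dict) -> bool:
        """Fires on a crude 'invariant feature': strong periodic contrast."""
        return sensory_projection.get("periodic_contrast", 0.0) > 0.5

    grounded_lexicon = {"striped": striped_detector}

    scene = {"periodic_contrast": 0.9, "hue": "orange"}
    print(grounded_lexicon["striped"](scene))  # True: the word applies to this projection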

A resolutely cognitive approach characterizes Marconi’s (1997) account of lexical semantic competence. In his view, lexical competence has two aspects: an inferential aspect, underlying performances such as semantically based inference and the command of synonymy, hyponymy and other semantic relations; and a referential aspect, which is in charge of performances such as naming (e.g., calling a horse ‘horse’) and application (e.g., answering the question “Are there any spoons in the drawer?”). Language users typically possess both aspects of lexical competence, though in different degrees for different words: a zoologist’s inferential competence on ‘manatee’ is usually richer than a layman’s, though a layman who spent her life among manatees may be more competent, referentially, than a “bookish” scientist. However, the two aspects are independent of each other, and neuropsychological evidence appears to show that they can be dissociated: there are patients whose referential competence is impaired or lost while their inferential competence is intact, and vice versa (see Section 5.3). Being a theory of individual competence, Marconi’s account does not deal directly with lexical meanings in a public language: communication depends both on the uniformity of cognitive interactions with the external world and on communal norms concerning the use of language, together with speakers’ deferential attitude toward semantic authorities.

Since the early 1970s, views on lexical meaning were revolutionized by semantic externalism. Initially, externalism was limited to proper names and natural kind words such as ‘gold’ or ‘lemon’. In slightly different ways, both Kripke (1972) and Putnam (1970, 1975) argued that the reference of such words was not determined by any description that a competent speaker associated with the word; more generally, and contrary to what Frege may have thought, it was not determined by any cognitive content associated with it in a speaker’s mind (for arguments to that effect, see the entry on names ). Instead, reference is determined, at least in part, by objective (“causal”) relations between a speaker and the external world. For example, a speaker refers to Aristotle when she utters the sentence “Aristotle was a great warrior”—so that her assertion expresses a false proposition about Aristotle, not a true proposition about some great warrior she may “have in mind”—thanks to her connection with Aristotle himself. In this case, the connection is constituted by a historical chain of speakers going back to the initial users of the name ‘Aristotle’, or its Greek equivalent, in baptism-like circumstances. To belong to the chain, speakers (including present-day speakers) are not required to possess any precise knowledge of Aristotle’s life and deeds; they are, however, required to intend to use the name as it is used by the speakers they are picking up the name from, i.e., to refer to the individual those speakers intend to refer to.

In the case of most natural kind names, it may be argued, baptisms are hard to identify or even conjecture. In Putnam’s view, for such words reference is determined by speakers’ causal interaction with portions of matter or biological individuals in their environment: ‘water’, for example, refers to this liquid stuff, stuff that is normally found in our rivers, lakes, etc. The indexical component (this liquid, our rivers) is crucial to reference determination: it wouldn’t do to identify the referent of ‘water’ by way of some description (“liquid, transparent, quenches thirst, boils at 100°C, etc.”), for something might fit the description yet fail to be water, as in Putnam’s (1973, 1975) famous Twin Earth thought experiment (see the entry on reference). It might be remarked that, thanks to modern chemistry, we now possess a description that is sure to apply to water and only to water: “being H₂O” (Millikan 2005). However, even if our chemistry were badly mistaken (as in principle it could turn out to be) and water were not, in fact, H₂O, ‘water’ would still refer to whatever has the same nature as this liquid. Something belongs to the extension of ‘water’ if and only if it is the same substance as this liquid, which we identify—correctly, as we believe—as being H₂O.

Let it be noted that in Putnam’s original proposal, reference determination is utterly independent of speakers’ cognition: ‘water’ on Twin Earth refers to XYZ (not to H 2 O) even though the difference between the two substances is cognitively inert, so that before chemistry was created nobody on either Earth or Twin Earth could have told them apart. However, the label ‘externalism’ has been occasionally used for weaker views: a semantic account may be regarded as externalist if it takes semantic content to depend in one way or another on relations a computational system bears to things outside itself (Rey 2005; Borg 2012), irrespective of whether such relations affect the system’s cognitive state. Weak externalism is hard to distinguish from forms of internalism on which a word’s reference is determined by information stored in a speaker’s cognitive system—information of which the speaker may or may not be aware (Evans 1982). Be that as it may, in what follows ‘externalism’ will be used to mean strong, or Putnamian, externalism.

Does externalism apply to other lexical categories besides proper names and natural kind words? Putnam (1975) extended it to artifactual words, claiming that ‘pencil’ would refer to pencils— those objects—even if they turned out not to fit the description by which we normally identify them (e.g., if they were discovered to be organisms, not artifacts). Schwartz (1978, 1980) pointed out, among many objections, that even in such a case we could make objects fitting the original description; we would then regard the pencil-like organisms as impostors, not as “genuine” pencils. Others sided with Putnam and the externalist account: for example, Kornblith (1980) pointed out that artifactual kinds from an ancient civilization could be re-baptized in total ignorance of their function. The new artifactual word would then refer to the kind those objects belong to independently of any beliefs about them, true or false. Against such externalist accounts, Thomasson (2007) argued that artifactual terms cannot refer to artifactual kinds independently of all beliefs and concepts about the nature of the kind, for the concept of the kind’s creator(s) is constitutive of the nature of the kind. Whether artifactual words are liable to an externalist account is still an open issue (for recent discussions see Marconi 2013; Bahr, Carrara & Jansen 2019; see also the entry on artifacts ), as is, more generally, the scope of application of externalist semantics.

There is another form of externalism that does apply to all or most words of a language: social externalism (Burge 1979), the view on which the meaning of a word as used by an individual speaker depends on the semantic standards of the linguistic community the speaker belongs to. In our community the word ‘arthritis’ refers to arthritis—an affliction of the joints—even when used by a speaker who believes that it can afflict the muscles as well and uses the word accordingly. If the community the speaker belongs to applied ‘arthritis’ to rheumatoid ailments in general, whether or not they afflict the joints, the same word form would not mean arthritis and would not refer to arthritis. Hence, a speaker’s mental contents, such as the meanings associated with the words she uses, depend on something external to her, namely the uses and the standards of use of the linguistic community she belongs to. Thus, social externalism eliminates the notion of idiolect: words only have the meanings conferred upon them by the linguistic community (“public” meanings); discounting radical incompetence, there is no such thing as individual semantic deviance, there are only false beliefs (for criticisms, see Bilgrami 1992, Marconi 1997; see also the entry on idiolects).

Though both forms of externalism focus on reference, neither is a complete reduction of lexical meaning to reference. Both Putnam and Burge make it a necessary condition of semantic competence on a word that a speaker commands information that other semantic views would regard as part of the word’s sense. For example, if a speaker believes that manatees are a kind of household appliance, she would not count as competent on the word ‘manatee’, nor would she refer to manatees by using it (Putnam 1975; Burge 1993). Beyond that, it is not easy for externalists to provide a satisfactory account of lexical semantic competence, as they are committed to regarding speakers’ beliefs and abilities (e.g., recognitional abilities) as essentially irrelevant to reference determination, hence to meaning. Two main solutions have been proposed. Putnam (1970, 1975) suggested that a speaker’s semantic competence consists in her knowledge of stereotypes associated with words. A stereotype is an oversimplified theory of a word’s extension: the stereotype associated with ‘tiger’ describes tigers as cat-like, striped, carnivorous, fierce, living in the jungle, etc. Stereotypes are not meanings, as they do not determine reference in the right way: there are albino tigers and tigers that live in zoos. What the ‘tiger’-stereotype describes is (what the community takes to be) the typical tiger. Knowledge of stereotypes is necessary to be regarded as a competent speaker, and—one surmises—it can also be considered sufficient for the purposes of ordinary communication. Thus, Putnam’s account does provide some content for semantic competence, though it dissociates it from knowledge of meaning.

On an alternative view (Devitt 1983), competence on ‘tiger’ does not consist in entertaining propositional beliefs such as “tigers are striped”, but rather in being appropriately linked to a network of causal chains for ‘tiger’ involving other people’s abilities, groundings, and reference borrowings. In order to understand the English word ‘tiger’ and use it in a competent fashion, a subject must be able to combine ‘tiger’ appropriately with other words to form sentences, to have thoughts which those sentences express, and to ground these thoughts in tigers. Devitt’s account appears to make some room for a speaker’s ability to, e.g., recognize a tiger when she sees one; however, the respective weights of individual abilities (and beliefs) and objective grounding are not clearly specified. Suppose a speaker A belongs to a community C that is familiar with tigers; unfortunately, A has no knowledge of the typical appearance of a tiger and is unable to tell a tiger from a leopard. Should A be regarded as a competent user of ‘tiger’ on account of her being “part of C” and therefore linked to a network of causal chains for ‘tiger’?

Some philosophers (e.g., Loar 1981; McGinn 1982; Block 1986) objected to the reduction of lexical meaning to reference, or to non-psychological factors that are alleged to determine reference. In their view, there are two aspects of meaning (more generally, of content): the narrow aspect, which captures the intuition that ‘water’ has the same meaning in both Earthian and Twin-Earthian English, and the wide aspect, which captures the externalist intuition that ‘water’ picks out different substances in the two worlds. The wide notion is required to account for the difference in reference between English and Twin-English ‘water’; the narrow notion is needed, first and foremost, to account for the relation between a subject’s beliefs and her behavior. The idea is that how an object of reference is described (not just which object one refers to) can make a difference in determining behavior. Oedipus married Jocasta because he thought he was marrying the queen of Thebes, not his mother, though as a matter of fact Jocasta was his mother. This applies to words of all categories: someone may believe that water quenches thirst without believing that H₂O does; Lois Lane believed that Superman was a superhero but she definitely did not believe the same of her colleague Clark Kent, so she behaved one way to the man she identified as Superman and another way to the man she identified as Clark Kent (though they were the same man). Theorists that countenance these two components of meaning and content usually identify the narrow aspect with the inferential or conceptual role of an expression e, i.e., with the aspect of e that contributes to determining the inferential relations between sentences containing an occurrence of e and other sentences. Crucially, the two aspects are independent: neither determines the other. The stress on the independence of the two factors also characterizes more recent versions of so-called “dual aspect” theories, such as Chalmers (1996, 2002).

While dual theorists agree with Putnam’s claim that some aspects of meaning are not “in the head”, others have opted for plain internalism. For example, Segal (2000) rejected the intuitions that are usually associated with the Twin-Earth cases by arguing that meaning (and content in general) “locally supervenes” on a subject’s intrinsic physical properties. But the most influential critic of externalism has undoubtedly been Chomsky (2000). First, he argued that much of the alleged support for externalism comes in fact from “intuitions” about words’ reference in this or that circumstance. But ‘reference’ (and the verb ‘refer’ as used by philosophers) is a technical term, not an ordinary word, hence we have no more intuitions about reference than we have about tensors or c-command. Second, if we look at how words such as ‘water’ are applied in ordinary circumstances, we find that speakers may call ‘water’ liquids that contain a smaller proportion of H₂O than other liquids they do not call ‘water’ (e.g., tea): our use of ‘water’ does not appear to be governed by hypotheses about microstructure. According to Chomsky, it may well be that progress in the scientific study of the language faculty will allow us to understand in what respects one’s picture of the world is framed in terms of things selected and individuated by properties of the lexicon, or involves entities and relationships describable by the resources of the language faculty. Some semantic properties do appear to be integrated with other aspects of language. However, so-called “natural kind words” (which in fact have little to do with kinds in nature, Chomsky claims) may do little more than indicate “positions in belief systems”: studying them may be of some interest for “ethnoscience”, surely not for a science of language. Along similar lines, others have maintained that the genuine semantic properties of linguistic expressions should be regarded as part of syntax, and that they constrain but do not determine truth conditions (e.g., Pietroski 2005, 2010). Hence, the connection between meaning and truth conditions (and reference) may be significantly looser than assumed by many philosophers.

“Ordinary language” philosophers of the 1950s and 1960s regarded work in formal semantics as essentially irrelevant to issues of meaning in natural language. Following Austin and the later Wittgenstein, they identified meaning with use and were prone to consider the different patterns of use of individual expressions as originating different meanings of the word. Grice (1975) argued that such a proliferation of meanings could be avoided by distinguishing between what is asserted by a sentence (to be identified with its truth conditions) and what is communicated by it in a given context (or in every “normal” context). For example, consider the following exchange:

  • A: Will Kim be hungry at 11am?
  • B: Kim had breakfast.

Although B does not literally assert that Kim had breakfast on that particular day (see, however, Partee 1973), she does communicate as much. More precisely, A could infer the communicated content by noticing that the asserted sentence, taken literally (“Kim had breakfast at least once in her life”), would be less informative than required in the context: thus, it would violate one or more principles of conversation (“maxims”) whereas there is no reason to suppose that the speaker intended to opt out of conversational cooperation (see the entries on Paul Grice and pragmatics ). If the interlocutor assumes that the speaker intended him to infer the communicated content—i.e., that Kim had breakfast that morning , so presumably she would not be hungry at 11—cooperation is preserved. Such non-asserted content, called ‘implicature’, need not be an addition to the overtly asserted content: e.g., in irony asserted content is negated rather than expanded by the implicature (think of a speaker uttering “Paul is a fine friend” to implicate that Paul has wickedly betrayed her).

Grice’s theory of conversation and implicatures was interpreted by many (including Grice himself) as a convincing way of accounting for the variety of contextually specific communicative contents while preserving the uniqueness of a sentence’s “literal” meaning, which was identified with truth conditions and regarded as determined by syntax and the conventional meanings of the occurring words, as in formal semantics. The only semantic role context was allowed to play was in determining the content of indexical words (such as ‘I’, ‘now’, ‘here’, etc.) and the effect of context-sensitive structures (such as tense) on a sentence’s truth conditions. However, in about the same years Travis (1975) and Searle (1979, 1980) pointed out that the semantic relevance of context might be much more pervasive, if not universal: intuitively, the same sentence type could have very different truth conditions in different contexts, though no indexical expression or structure appeared to be involved. Take the sentence “There is milk in the fridge”: in the context of morning breakfast it will be considered true if there is a carton of milk in the fridge and false if there is a patch of milk on a tray in the fridge, whereas in the context of cleaning up the kitchen truth conditions are reversed. Examples can be multiplied indefinitely, as indefinitely many factors can turn out to be relevant to the truth or falsity of a sentence as uttered in a particular context. Such variety cannot be plausibly reduced to traditional polysemy such as the polysemy of ‘property’ (meaning quality or real estate), nor can it be described in terms of Gricean implicatures: implicatures are supposed not to affect a sentence’s truth conditions, whereas here it is precisely the sentence’s truth conditions that are seen as varying with context.

The traditionalist could object by challenging the contextualist’s intuitions about truth conditions. “There is milk in the fridge”, she could argue, is true if and only if there is a certain amount (a few molecules will do) of a certain organic substance in the relevant fridge (for versions of this objection, Cappelen & Lepore 2005). So the sentence is true both in the carton case and in the patch case; it would be false only if the fridge did not contain any amount of any kind of milk (whether cow milk or goat milk or elephant milk). The contextualist’s reply is that, in fact, neither the speaker nor the interpreter is aware of such alleged literal content (the point is challenged by Fodor 1983, Carston 2002); but “what is said” must be intuitively accessible to the conversational participants (Availability Principle, Recanati 1989). If truth conditions are associated with what is said—as the traditionalist would agree they are—then in many cases a sentence’s literal content, if there is such a thing, does not determine a complete, evaluable proposition. For a genuine proposition to arise, a sentence type’s literal content (as determined by syntax and conventional word meaning) must be enriched or otherwise modified by primary pragmatic processes based on the speakers’ background knowledge relative to each particular context of use of the sentence. Such processes differ from Gricean implicature-generating processes in that they come into play at the sub-propositional level; moreover, they are not limited to saturation of indexicals but may include the replacement of a constituent with another. These tenets define contextualism (Recanati 1993; Bezuidenhout 2002; Carston 2002; relevance theory (Sperber & Wilson 1986) is in some respects a precursor of such views). Contextualists take different stands on the nature of the semantic contribution made by words to sentences, though they typically agree that it is insufficient to fix truth conditions (Stojanovic 2008). See Del Pinal (2018) for an argument that radical contextualism (in particular, truth-conditional pragmatics) should instead commit to rich lexical items which, in certain conditions, do suffice to fix truth conditions.

Even if sentence types have no definite truth conditions, it does not follow that lexical types do not make definite or predictable contributions to the truth conditions of sentences (think of indexical words). It does follow, however, that conventional word meanings are not the final constituents of complete propositions (see Allot & Textor 2012). Does this imply that there are no such things as lexical meanings understood as features of a language? If so, how should we account for word acquisition and lexical competence in general? Recanati (2004) does not think that contextualism as such is committed to meaning eliminativism, the view on which words as types have no meaning; nevertheless, he regards it as defensible. Words could be said to have, rather than “meaning”, a semantic potential , defined as the collection of past uses of a word w on the basis of which similarities can be established between source situations (i.e., the circumstances in which a speaker has used w ) and target situations (i.e., candidate occasions of application of w ). It is natural to object that even admitting that long-term memory could encompass such an immense amount of information (think of the number of times ‘table’ or ‘woman’ are used by an average speaker in the course of her life), surely working memory could not review such information to make sense of new uses. On the other hand, if words were associated with “more abstract schemata corresponding to types of situations”, as Recanati suggests as a less radical alternative to meaning eliminativism, one wonders what the difference would be with respect to traditional accounts in terms of polysemy.
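
One crude way to picture Recanati’s proposal is to store past uses of a word as situation descriptions and to license new applications by similarity to some stored source situation. The sketch below is only an assumed toy encoding (the features, the similarity measure, and the threshold are inventions for illustration), not Recanati’s own account.

    # Toy "semantic potential" (assumed encoding, for illustration): a word is
    # associated with descriptions of past source situations; it applies to a
    # target situation that is similar enough to one of them.
    past_uses = {
        "table": [
            {"flat_top": 1.0, "raised": 1.0, "used_for_meals": 1.0},
            {"flat_top": 1.0, "raised": 1.0, "used_for_meals": 0.0},
        ]
    }

    def similarity(a: dict, b: dict) -> float:
        """Average feature agreement between two situation descriptions."""
        keys = set(a) | set(b)
        return sum(1 - abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys) / len(keys)

    def applies(word: str, target: dict, threshold: float = 0.8) -> bool:
        """A target situation falls under the word if it resembles some past use."""
        return any(similarity(source, target) >= threshold
                   for source in past_uses[word])

    print(applies("table", {"flat_top": 1.0, "raised": 1.0, "used_for_meals": 0.5}))  # True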

Other conceptions of “what is said” make more room for the semantic contribution of conventional word meanings. Bach (1994) agrees with contextualists that the linguistic meaning of words (plus syntax and after saturation) does not always determine complete, truth-evaluable propositions; however, he maintains that they do provide some minimal semantic information, a so-called ‘propositional radical’, that allows pragmatic processes to issue in one or more propositions. Bach identifies “what is said” with this minimal information. However, many have objected that minimal content is extremely hard to isolate (Recanati 2004; Stanley 2007). Suppose it is identified with the content that all the utterances of a sentence type share; unfortunately, no such content can be attributed to a sentence such as “Every bottle is in the fridge”, for there is no proposition that is stably asserted by every utterance of it (surely not the proposition that every bottle in the universe is in the fridge, which is never asserted). Stanley’s (2007) indexicalism rejects the notion of minimal proposition and any distinction between semantic content and communicated content: communicated content can be entirely captured by means of consciously accessible, linguistically controlled content (content that results from semantic value together with the provision of values to free variables in syntax, or semantic value together with the provision of arguments to functions from semantic types to propositions) together with general conversational norms. Accordingly, Stanley generalizes contextual saturation processes that are usually regarded as characteristic of indexicals, tense, and a few other structures; moreover, he requires that the relevant variables be linguistically encoded, either syntactically or lexically. It remains to be seen whether such solutions apply (in a non- ad hoc way) to all the examples of content modulation that have been presented in the literature.

Finally, minimalism (Borg 2004, 2012; Cappelen & Lepore 2005) is the view that appears (and intends) to be closest to the Frege-Montague tradition. The task of a semantic theory is said to be minimal in that it is supposed to account only for the literal meaning of sentences: context does not affect literal semantic content but “what the speaker says” as opposed to “what the sentence means” (Borg 2012). In this sense, semantics is not another name for the theory of meaning, because not all meaning-related properties are semantic properties (Borg 2004). Contrary to contextualism and Bach’s theory, minimalism holds that lexicon and syntax together determine complete truth-evaluable propositions. Indeed, this is definitional for lexical meaning: word meanings are the kind of things such that, if one puts enough of them together in the right sort of way, what one gets is propositional content (Borg 2012). Borg believes that, in order to be truth-evaluable, propositional contents must be “about the world”, and that this entails some form of semantic externalism. However, the identification of lexical meaning with reference makes it hard to account for semantic relations such as synonymy, analytic entailment or the difference between ambiguity and polysemy, and for syntactically relevant properties: the difference between “John is easy to please” and “John is eager to please” cannot be explained by the fact that ‘easy’ means the property easy (see the entry on ambiguity ). To account for semantically based syntactic properties, words may come with “instructions” that, unlike meaning postulates (which Borg rejects), are not constitutive of a word’s meaning, though awareness of them is part of a speaker’s competence. Once more, lexical semantic competence is divorced from grasp of word meaning. In conclusion, some information counts as lexical if it is either perceived as such in “firm, type-level lexical intuitions” or capable of affecting the word’s syntactic behavior. Borg concedes that even such an extended conception of lexical content will not capture, e.g., analytic entailments such as the relation between ‘bachelor’ and ‘unmarried’.

4. Linguistics

The emergence of modern linguistic theories of word meaning is usually placed at the transition from historical-philological semantics ( Section 2.2 ) to structuralist semantics, the movement in linguistics inaugurated at the turn of the 20 th century by Ferdinand de Saussure with his Cours de Linguistique Générale (1995 [1916]).

The advances introduced by the structuralist conception of word meaning are best appreciated by contrasting its basic assumptions with those of historical-philological semantics. Let us recall the three most important differences (Lepschy 1970; Matthews 2001).

  • Anti-psychologism . Structuralist semantics views language as a symbolic system whose properties and internal dynamics can be analyzed without taking into account their implementation in the mind/brain of language users. Just as the rules of chess can be stated and analyzed without making reference to the mental properties of chess players, so a theory of word meaning can, and should, proceed simply by examining the formal role played by words within the system of the language.
  • Anti-historicism . Since the primary explanandum of structuralist semantics is the role played by lexical expressions within structured linguistic systems, structuralist semantics privileges the synchronic description of word meaning. Diachronic accounts of word meaning are logically posterior to the analysis of the relational properties statically exemplified by words at different stages of the evolution of the language.
  • Anti-localism . Because the semantic properties of words depend on the relations they entertain with other expressions in the same lexical system, word meanings cannot be studied in isolation. This is both an epistemological and a foundational claim, i.e., a claim about how matters related to word meaning should be addressed in the context of a semantic theory of word meaning, and a claim about the dynamics whereby the elements of a system of signs acquire the meaning they have for their users.

The account of lexical phenomena popularized by structuralism gave rise to a variety of descriptive approaches to word meaning. We can group them in three categories (Lipka 1992; Murphy 2003; Geeraerts 2006).

  • Lexical Field Theory . Introduced by Trier (1931), it argues that word meaning should be studied by looking at the relations holding between words in the same lexical field. A lexical field is a set of semantically related words whose meanings are mutually interdependent and which together spell out the conceptual structure of a given domain of reality. Lexical Field Theory assumes that lexical fields are closed sets with no overlapping meanings or semantic gaps. Whenever a word undergoes a change in meaning (e.g., its range of application is extended or contracted), the whole arrangement of its lexical field is affected (Lehrer 1974).
  • Componential Analysis . Developed in the second half of the 1950s by European and American linguists (e.g., Pottier, Coseriu, Bloomfield, Nida), this framework argues that word meaning can be described on the basis of a finite set of conceptual building blocks called semantic components or features . For example, ‘man’ can be analyzed as [+ male ], [+ mature ], ‘woman’ as [− male ], [+ mature ], ‘child’ as [+/− male ], [− mature ] (Leech 1974); a toy sketch of this style of analysis is given after this list.
  • Relational Semantics . Prominent in the work of linguists such as Lyons (1963), this approach shares with Lexical Field Theory the commitment to a style of analysis that privileges the description of lexical relations, but departs from it in two important respects. First, it postulates no direct correspondence between sets of related words and domains of reality, thereby dropping the assumption that the organization of lexical fields should be understood to reflect the organization of the non-linguistic world. Second, instead of deriving statements about the meaning relations entertained by a lexical item (e.g., synonymy, hyponymy) from an independent account of its meaning, for relational semantics word meanings are constituted by the set of semantic relations words participate in (Evens et al. 1980; Cruse 1986).
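To make the componential idea concrete, here is a deliberately simplified sketch in which word meanings are treated as small bundles of binary features and semantic comparisons reduce to operations over those bundles. The feature inventory and the helper functions are purely illustrative and are not drawn from any particular componential framework.

```python
# Toy illustration of componential analysis: word meanings as bundles of
# binary features. Feature names, values, and helpers are illustrative only.
LEXICON = {
    "man":   {"male": True,  "mature": True},
    "woman": {"male": False, "mature": True},
    "boy":   {"male": True,  "mature": False},
}

def shared_features(w1, w2):
    """Return the feature-value pairs on which two words agree."""
    f1, f2 = LEXICON[w1], LEXICON[w2]
    return {k: v for k, v in f1.items() if f2.get(k) == v}

def contrasting_features(w1, w2):
    """Return the features on which two words take opposite values."""
    f1, f2 = LEXICON[w1], LEXICON[w2]
    return {k: (f1[k], f2[k]) for k in f1 if k in f2 and f1[k] != f2[k]}

print(shared_features("man", "boy"))        # {'male': True}
print(contrasting_features("man", "woman")) # {'male': (True, False)}
```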

The componential current of structuralism was the first to produce an important innovation in theories of word meaning: Katzian semantics (Katz & Fodor 1963; Katz 1972, 1987). Katzian semantics combined componential analysis with a mentalistic conception of word meaning and developed a method for the description of lexical phenomena in the context of a formal grammar. The mentalistic component of Katzian semantics is twofold. First, word meanings are defined as aggregates of simpler conceptual features inherited from our general categorization abilities. Second, the proper subject matter of the theory is no longer identified with the “structure of the language” but, following Chomsky (1957, 1965), with speakers’ ability to competently interpret the words and sentences of their language. In Katzian semantics, word meanings are structured entities whose representations are called semantic markers . A semantic marker is a hierarchical tree with labeled nodes whose structure reproduces the structure of the represented meaning, and whose labels identify the word’s conceptual components. For example, the figure below illustrates the sense of ‘chase’ (simplified from Katz 1987).

(Activity) [NP,S]
  (Physical)
    (Movement)
      (Fast)
      (Direction of) [NP,VP,S]
        (Toward Location of) [NP,VP,S]
  (Purpose)
    (Catching) [NP,VP,S]

Katz (1987) claimed that this approach was superior in both transparency and richness to the analysis of word meaning that could be provided via meaning postulates. For example, in Katzian semantics the validation of conditionals such as \(\forall x\forall y (\textrm{chase}(x, y) \to \textrm{follow}(x,y))\) could be reduced to a matter of inspection: one had simply to check whether the semantic marker of ‘follow’ was a subtree of the semantic marker of ‘chase’. Furthermore, the method incorporated syntagmatic relations in the representation of word meanings (witness the grammatical tags ‘NP’, ‘VP’ and ‘S’ attached to the conceptual components above). Katzian semantics was favorably received by the Generative Semantics movement (Fodor 1977; Newmeyer 1980) and boosted an interest in the formal representation of word meaning that would dominate the linguistic scene for decades to come (Harris 1993). Nonetheless, it was eventually abandoned. As subsequent commentators noted, Katzian semantics suffered from three important drawbacks. First, the theory did not provide any clear model of how the complex conceptual information represented by semantic markers contributed to the truth conditions of sentences (Lewis 1972). Second, some aspects of word meaning that could be easily represented with meaning postulates could not be expressed through semantic markers, such as the symmetry and the transitivity of predicates (e.g., \(\forall x\forall y (\textrm{sibling}(x, y) \to \textrm{sibling}(y, x))\) or \(\forall x\forall y\forall z (\textrm{louder}(x, y) \mathbin{\&} \textrm{louder}(y, z) \to \textrm{louder}(x, z))\); see Dowty 1979). Third, Katz’s arguments for the view that word meanings are intrinsically structured turned out to be vulnerable to objections from proponents of atomistic views of word meaning (see, most notably, Fodor & Lepore 1992).
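The subtree-inspection idea can be made vivid with a small sketch. The code below encodes semantic markers as label-plus-children pairs and checks whether one marker’s components are all contained, structure-preservingly, in another’s. The marker contents are simplified illustrations rather than Katz’s actual analyses, and the matching procedure is only a rough approximation of the inclusion relation he had in mind.

```python
def subsumed_by(candidate, marker):
    """Return True if every labeled node of `candidate` can be matched,
    structure-preservingly, by a node of `marker` (which may carry extra
    components). Rough sketch: the matching is not required to be injective,
    as a fuller implementation would demand."""
    c_label, c_children = candidate
    m_label, m_children = marker
    if c_label == m_label and all(
        any(subsumed_by(cc, mc) for mc in m_children) for cc in c_children
    ):
        return True
    # otherwise, try to find the candidate deeper inside the marker
    return any(subsumed_by(candidate, mc) for mc in m_children)

# Simplified markers (illustrative contents, not Katz's actual analyses).
FOLLOW = ("Activity", [
    ("Movement", [("Direction of", [("Toward Location of", [])])]),
])
CHASE = ("Activity", [
    ("Physical", [("Movement", [
        ("Fast", []),
        ("Direction of", [("Toward Location of", [])]),
    ])]),
    ("Purpose", [("Catching", [])]),
])

print(subsumed_by(FOLLOW, CHASE))  # True: 'chase' contains the components of 'follow'
print(subsumed_by(CHASE, FOLLOW))  # False: the converse inclusion does not hold
```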

After Katzian semantics, the landscape of linguistic theories of word meaning bifurcated. On one side, we find a group of theories advancing the decompositional agenda established by Katz. On the other side, we find a group of theories fostering the relational approach originated by Lexical Field Theory and relational semantics. Following Geeraerts (2010), we will briefly characterize the main representatives of each group.

The basic idea of the Natural Semantic Metalanguage approach (henceforth, NSM; Wierzbicka 1972, 1996; Goddard & Wierzbicka 2002) is that word meaning is best described through the combination of a small set of elementary conceptual particles, known as semantic primes . Semantic primes are primitive (i.e., not decomposable into further conceptual parts), innate (i.e., not learned), and universal (i.e., explicitly lexicalized in all natural languages, whether in the form of a word, a morpheme, a phraseme, and so forth). According to NSM, the meaning of any word in any natural language can be defined by appropriately combining these fundamental conceptual particles. Wierzbicka (1996) proposed a catalogue of about 60 semantic primes, designed to analyze word meanings within so-called reductive paraphrases. For example, the reductive paraphrase for ‘top’ is a part of something; this part is above all the other parts of this something . NSM has produced interesting applications in comparative linguistics (Peeters 2006), language teaching (Goddard & Wierzbicka 2007), and lexical typology (Goddard 2012). However, the approach has been criticized on various grounds. First, it has been argued that the method followed by NSM in the identification of semantic primes is insufficiently clear (e.g., Matthewson 2003). Second, some have observed that reductive paraphrases are too vague to be considered adequate representations of word meanings, since they fail to account for fine-grained differences between semantically neighboring words. For example, the reductive paraphrase provided by Wierzbicka for ‘sad’ (i.e., x feels something; sometimes a person thinks something like this: something bad happened; if i didn’t know that it happened i would say: i don’t want it to happen; i don’t say this now because i know: i can’t do anything; because of this, this person feels something bad; x feels something like this ) seems to apply equally well to ‘unhappy’, ‘distressed’, ‘frustrated’, ‘upset’, and ‘annoyed’ (e.g., Aitchison 2012). Third, there is no consensus on what items should ultimately feature in the list of semantic primes available to reductive paraphrases: the content of the list is debated and varies considerably between versions of NSM. Fourth, some purported semantic primes appear to fail to comply with the universality requirement and are not explicitly lexicalized in all known languages (Bohnemeyer 2003; Von Fintel & Matthewson 2008). See Goddard (1998) for some replies and Riemer (2006) for further objections.

For NSM, word meanings can be exhaustively represented with a metalanguage appealing exclusively to the combination of primitive linguistic particles. Conceptual Semantics (Jackendoff 1983, 1990, 2002) proposes a more open-ended approach. According to Conceptual Semantics, word meanings are essentially an interface phenomenon between a specialized body of linguistic knowledge (e.g., morphosyntactic knowledge) and core non-linguistic cognition. Word meanings are thus modeled as hybrid semantic representations combining linguistic features (e.g., syntactic tags) and conceptual elements grounded in perceptual knowledge and motor schemas. For example, here is the semantic representation of ‘drink’ according to Jackendoff.
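In rough outline, and simplifying the notation of Jackendoff (1990), the entry can be rendered along the following lines, where the bracket labels (Event, Thing, Path, Place) stand in for the ontological categories that appear as subscripts in Jackendoff’s own notation:

drink
  V
  [ ___ <NP_j> ]
  [Event CAUSE ([Thing ]_i, [Event GO ([Thing LIQUID]_j, [Path TO ([Place IN ([Thing MOUTH OF ([Thing ]_i)])])])])]

that is, roughly, an event in which an agent causes a liquid to go along a path terminating inside the agent’s mouth.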

Syntactic tags represent the grammatical properties of the word under analysis, while the items in subscript are picked from a core set of perceptually grounded primitives (e.g., event, state, thing, path, place, property, amount ) which are assumed to be innate, cross-modal and universal categories of the human mind. The decompositional machinery of Conceptual Semantics has a number of attractive features. Most notably, its representations take into account grammatical class and word-level syntax, which are plausibly an integral aspect of our knowledge of the meaning of words. However, some of its claims about the interplay between language and conceptual structure appear more problematic. To begin with, it has been observed that speakers tend to use causative predicates (e.g., ‘drink’) and the paraphrases expressing their decompositional structure (e.g., “cause a liquid to go into someone or something’s mouth”) in different and sometimes non-interchangeable ways (e.g., Wolff 2003), which raises concerns about the hypothesis that decompositional analyses à la Jackendoff may be regarded as faithful representations of word meanings. In addition, Conceptual Semantics is somewhat unclear as to what exact method should be followed in the identification of the motor-perceptual primitives that can feed descriptions of word meanings (Pulman 2005). Finally, the restriction placed by Conceptual Semantics on the type of conceptual material that can inform definitions of word meaning (low-level primitives grounded in perceptual knowledge and motor schemas) appears to affect the explanatory power of the framework. For example, how can one account for the difference in meaning between ‘jog’ and ‘run’ without taking into account higher-level, arguably non-perceptual knowledge about the social characteristics of jogging, which typically implies a certain leisure setting, the intention to contribute to physical wellbeing, and so on? See Taylor (1996), Deane (1996).

The neat dividing line drawn between word meanings and general world knowledge by Conceptual Semantics does not tell us much about the dynamic interaction of the two in language use. The Two-Level Semantics of Bierwisch (1983a,b) and Lang (Bierwisch & Lang 1989; Lang 1993) aims to provide such a dynamic account. Two-Level Semantics views word meaning as the result of the interaction between two systems: semantic form (SF) and conceptual structure (CS). SF is a formalized representation of the basic features of a word. It contains grammatical information that specifies, e.g., the admissible syntactic distribution of the word, plus a set of variables and semantic parameters whose value is determined by the interaction with CS. By contrast, CS consists of language-independent systems of knowledge (including general world knowledge) that mediate between language and the world (Lang & Maienborn 2011). According to Two-Level Semantics, for example, polysemous words can express variable meanings by virtue of having a stable underspecified SF which can be flexibly manipulated by CS. By way of example, consider the word ‘university’, which can be read as referring either to an institution (as in “the university selected John’s application”) or to a building (as in “the university is located on the North side of the river”). Simplifying a bit, Two-Level Semantics explains the dynamics governing the selection of these readings as follows.

  • Because ‘university’ belongs to the category of words denoting objects primarily characterized by their purpose, the general lexical entry for ‘university’ is \(\lambda x [\textrm{purpose} [x w]]\).
  • Based on our knowledge that the primary purpose of universities is to provide advanced education, the SF of ‘university’ is specified as \(\lambda x [\textrm{purpose} [x w] \mathbin{\&} \textit{advanced study and teaching} [w]]\).
  • The alternative readings of ‘university’ are a function of the two ways CS can set the value of the variable x in its SF, such ways being \(\lambda x [\textrm{institution} [x] \mathbin{\&} \textrm{purpose} [x w]]\) and \(\lambda x [\textrm{building} [x] \mathbin{\&} \textrm{purpose} [x w]]\).
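A toy rendering of this selection mechanism may help fix ideas: the lexical entry stores an underspecified semantic form, and conceptual structure supplies the sortal specification that yields one reading or the other. The representation and the helper function below are merely illustrative and do not reproduce the theory’s formal apparatus.

```python
# Toy rendering of the SF/CS interaction for 'university' (illustrative only):
# the entry stores an underspecified semantic form; conceptual structure
# fixes the open sort, yielding the institution or the building reading.

UNIVERSITY_SF = {"purpose": "advanced study and teaching", "sort": None}

def specify(sf, sort):
    """CS-driven specification: fill in the open sort of an underspecified SF."""
    return {**sf, "sort": sort}

# "the university selected John's application"
institution_reading = specify(UNIVERSITY_SF, "institution")
# "the university is located on the North side of the river"
building_reading = specify(UNIVERSITY_SF, "building")

print(institution_reading)  # {'purpose': 'advanced study and teaching', 'sort': 'institution'}
print(building_reading)     # {'purpose': 'advanced study and teaching', 'sort': 'building'}
```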

Two-Level Semantics shares Jackendoff’s and Wierzbicka’s commitment to a descriptive paradigm that anchors word meaning to a stable decompositional template, all the while avoiding the immediate complications arising from a restrictive characterization of the type of conceptual factors that can modulate such stable decompositional templates in contexts. But there are, once again, a few significant issues. A first problem is definitional accuracy: defining the SF of ‘university’ as \(\lambda x [\textrm{purpose} [x w] \mathbin{\&} \textit{advanced study and teaching} [w]]\) seems too loose to reflect the subtle differences in meaning among ‘university’ and related terms designating institutions for higher education, such as ‘college’ or ‘academy’. Furthermore, the apparatus of Two-Level Semantics relies heavily on lambda expressions, which, as some commentators have noted (e.g., Taylor 1994, 1995), appear ill-suited to represent the complex forms of world knowledge we often rely on to fix the meaning of highly polysemous words. See also Wunderlich (1991, 1993).

The Generative Lexicon theory (GL; Pustejovsky 1995) takes a different approach. Instead of explaining the contextual flexibility of word meaning by appealing to rich conceptual operations applied to semantically thin lexical entries, this approach postulates lexical entries rich in conceptual information and knowledge of worldly facts. According to classical GL, the informational resources encoded in the lexical entry for a typical word w consist of the following four levels.

  • A lexical typing structure , specifying the semantic type of w within the type system of the language;
  • An argument structure , representing the number and nature of the arguments supported by w ;
  • An event structure , defining the event type denoted by w (e.g., state, process, transition);
  • A qualia structure , specifying the predicative force of w .

In particular, qualia structure specifies the conceptual relations that speakers associate to the real-world referents of a word and impact on the way the word is used in the language (Pustejovsky 1998). For example, our knowledge that bread is something that is brought about through baking is considered a Quale of the word ‘bread’, and this knowledge is responsible for our understanding that, e.g., “fresh bread” means “bread which has been baked recently”. GL distinguishes four types of qualia:

  • constitutive : the relation between an object x and its constituent parts;
  • formal : the basic ontological category of x ;
  • telic : the purpose and the function of x ;
  • agentive : the factors involved in the origin of x .

Taken together, these qualia form the “qualia structure” of a word. For example, the qualia structure of the noun ‘sandwich’ will feature information about the composition of sandwiches, their nature as physical artifacts, their being intended to be eaten, and our knowledge about the operations typically involved in the preparation of sandwiches. The notation is as follows.

sandwich(x)
  const = {bread, …}
  form = physobj(x)
  tel = eat(P, g, x)
  agent = artifact(x)

Qualia structure is the primary explanatory device by which GL accounts for polysemy. The sentence “Mary finished the sandwich” receives the default interpretation “Mary finished eating the sandwich” because the argument structure of ‘finish’ requires an action as direct object, and the qualia structure of ‘sandwich’ allows the generation of the appropriate sense via type coercion (Pustejovsky 2006). GL is an ongoing research program (Pustejovsky et al. 2012) that has led to significant applications in computational linguistics (e.g., Pustejovsky & Jezek 2008; Pustejovsky & Rumshisky 2008). But like the theories mentioned so far, it has been subject to criticisms. A first general criticism is that the decompositional assumptions underlying GL are unwarranted and should be replaced by an atomist view of word meaning (Fodor & Lepore 1998; see Pustejovsky 1998 for a reply). A second criticism is that GL’s focus on variations in word meaning which depend on sentential context and qualia structure is too narrow, since contextual variations in word meaning often depend on more complex factors, such as the ability to keep track of coherence relations in a discourse (e.g., Asher & Lascarides 1995; Lascarides & Copestake 1998; Kehler 2002; Asher 2011). Finally, the empirical adequacy of the framework has been called into question. It has been argued that the formal apparatus of GL leads to incorrect predictions, that qualia structure sometimes overgenerates or undergenerates interpretations, and that the rich lexical entries postulated by GL are psychologically implausible (e.g., Jayez 2001; Blutner 2002).
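A schematic sketch may help illustrate how a telic quale can drive the kind of coercion just described. The data structures and the coercion rule below are simplified illustrations, not Pustejovsky’s formalism, and the quale values are reduced to bare labels.

```python
# Schematic sketch of qualia-driven type coercion (illustrative, not GL's
# actual formalism): when a verb selecting an event combines with an object-
# denoting noun, a default event reading is recovered from the telic quale.

SANDWICH = {
    "type": "physobj",
    "qualia": {
        "const": {"bread"},    # what it is made of
        "formal": "physobj",   # its basic ontological category
        "telic": "eat",        # what it is for
        "agentive": "make",    # how it comes into being
    },
}

def coerce_to_event(selected_type, noun_entry):
    """Return a default event reading when an event-selecting predicate
    receives an object-denoting argument. A fuller treatment would also
    consider the agentive quale (e.g., 'finish making the sandwich')."""
    if selected_type == "event" and noun_entry["type"] != "event":
        return noun_entry["qualia"]["telic"]
    return None

# "Mary finished the sandwich": 'finish' selects an action/event, so coercion
# yields the default reading "finished eating the sandwich".
print(coerce_to_event("event", SANDWICH))  # eat
```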

To conclude this section, we will briefly mention some contemporary approaches to word meaning that, in different ways, pursue the theoretical agenda of the relational current of the structuralist paradigm. For pedagogical convenience, we can group them into two categories. On the one hand, we have network approaches, which formalize knowledge of word meaning within models where the lexicon is seen as a structured system of entries interconnected by sense relations such as synonymy, antonymy, and meronymy. On the other, we have statistical approaches, whose primary aim is to investigate the patterns of co-occurrence among words in linguistic corpora.

The main example of network approaches is perhaps Collins and Quillian’s (1969) hierarchical network model, in which words are represented as entries in a network of nodes, each comprising a set of conceptual features defining the conventional meaning of the word in question, and connected to other nodes in the network through semantic relations (more in Lehman 1992). Subsequent developments of the hierarchical network model include the Semantic Feature Model (Smith, Shoben & Rips 1974), the Spreading Activation Model (Collins & Loftus 1975; Bock & Levelt 1994), the WordNet database (Fellbaum 1998), as well as the connectionist models of Seidenberg & McClelland (1989), Hinton & Shallice (1991), and Plaut & Shallice (1993). More on this in the entry on connectionism .
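The flavor of such network models can be conveyed with a toy fragment in which nodes carry conceptual features and IS-A links allow features stored at superordinate nodes to be inherited, in the spirit of Collins and Quillian’s proposal. The nodes, features, and retrieval procedure are illustrative simplifications.

```python
# Toy semantic network in the spirit of hierarchical network models:
# nodes carry features, and IS-A links allow features to be inherited
# from superordinate nodes (all contents are illustrative).
NETWORK = {
    "animal": {"isa": None,     "features": {"breathes", "eats"}},
    "bird":   {"isa": "animal", "features": {"has wings", "can fly"}},
    "canary": {"isa": "bird",   "features": {"is yellow", "can sing"}},
}

def has_feature(word, feature):
    """Walk up the IS-A hierarchy until the feature is found or the top is reached."""
    node = word
    while node is not None:
        if feature in NETWORK[node]["features"]:
            return True
        node = NETWORK[node]["isa"]
    return False

print(has_feature("canary", "can sing"))  # True: stored at the 'canary' node
print(has_feature("canary", "breathes"))  # True: inherited from 'animal'
```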

Finally, statistical analysis investigates word meaning by examining through computational means the distribution of words in linguistic corpora. The main idea is to use quantitative data about the frequency of co-occurrence of sets of lexical items to identify their semantic properties and to differentiate their senses (for overviews, see Atkins & Zampolli 1994; Manning & Schütze 1999; Stubbs 2002; Sinclair 2004). Notice that while symbolic networks are models of the architecture of the lexicon that seek to be psychologically adequate (i.e., to reveal how knowledge of word meaning is stored and organized in the mind/brain of human speakers), statistical approaches to word meaning are not necessarily interested in psychological adequacy, and may have completely different goals, such as building a machine translation service able to mimic human performance (a goal that can obviously be achieved without reproducing the cognitive mechanisms underlying translation in humans). More on this in the entry on computational linguistics .
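A minimal illustration of the statistical idea, assuming a toy corpus, a bag-of-words notion of co-occurrence, and cosine similarity as the comparison measure (all choices here are illustrative, not a description of any particular system):

```python
# Minimal illustration of distributional analysis: words occurring in similar
# contexts receive similar co-occurrence vectors (toy corpus and toy measure).
from collections import Counter
from math import sqrt

corpus = [
    "the dog chased the cat",
    "the dog bit the postman",
    "the cat chased the mouse",
    "the senate passed the bill",
    "the senate debated the bill",
]

def cooccurrence_vector(target):
    """Count the words co-occurring with `target` in the same sentence."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target)
    return counts

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

dog, cat, senate = map(cooccurrence_vector, ["dog", "cat", "senate"])
print(cosine(dog, cat) > cosine(dog, senate))  # True: 'dog' distributes more like 'cat'
```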

5. Cognitive Science

As we have seen, most theories of word meaning in linguistics face, at some point, the difficulties involved in drawing a plausible dividing line between word knowledge and world knowledge, and the various ways they attempt to meet this challenge display some recurrent features. For example, they assume that the lexicon, though richly interfaced with world knowledge and non-linguistic cognition, remains an autonomous representational system encoding a specialized body of linguistic knowledge. In this section, we survey a group of empirical approaches that adopt a different stance on word meaning. The focus is once again psychological, which means that the overall goal of these approaches is to provide a cognitively realistic account of the representational repertoire underlying knowledge of word meaning. Unlike the approaches surveyed in Section 4 , however, these theories tend to encourage a view on which the distinction between the semantic and pragmatic aspects of word meaning is highly unstable (or even impossible to draw), where lexical knowledge and knowledge of worldly facts are aspects of a continuum, and where the lexicon is permeated by our general inferential abilities (Evans 2010). Section 5.1 will briefly illustrate the central assumptions underlying the study of word meaning in cognitive linguistics. Section 5.2 will turn to the study of word meaning in psycholinguistics. Section 5.3 will conclude with some references to neurolinguistics.

At the beginning of the 1970s, Eleanor Rosch put forth a new theory of the mental representation of categories. Concepts such as furniture or bird , she claimed, are not represented just as sets of criterial features with clear-cut boundaries, so that an item can be conceived as falling or not falling under the concept based on whether or not it meets the relevant criteria. Rather, items within categories can be considered more or less representative of the category itself (Rosch 1975; Rosch & Mervis 1975; Mervis & Rosch 1981). Several experiments seemed to show that the application of concepts is no simple yes-or-no business: some items (the “good examples”) are more easily identified as falling under a concept than others (the “poor examples”). An automobile is perceived as a better example of vehicle than a rowboat, and much better than an elevator; a carrot is more readily identified as an example of the concept vegetable than a pumpkin. If the concepts speakers associate to category words (such as ‘vehicle’ and ‘vegetable’) were mere bundles of criterial features, these preferences would be inexplicable, since they discriminate among items that meet the criteria equally well. It is thus plausible to assume that the concepts associated to category words have a center-periphery architecture centered on the most representative examples of the category: a robin is perceived as a more “birdish” bird than an ostrich or, as people would say, closer to the prototype of a bird or to the prototypical bird (see the entry on concepts ).

Although nothing in Rosch’s experiments licensed the conclusion that prototypical rankings should be reified and treated as the content of concepts (what her experiments did support was merely that a theory of the mental representation of categories should be consistent with the existence of prototype effects ), the study of prototypes revolutionized the existing approaches to category concepts (Murphy 2002) and was a leading force behind the birth of cognitive linguistics. Prototypes were central to the development of the Radial Network Theory of Brugman (1988 [1981]) and Lakoff (Brugman & Lakoff 1988), which proposed to model the sense network of words by introducing in the architecture of word meanings the center-periphery relation at the heart of Rosch’s seminal work. According to Brugman, word meanings can typically be modeled as radial complexes where a dominant sense is related to less typical senses by means of semantic relations such as metaphor and metonymy. For example, the sense network of ‘fruit’ features product of plant growth at its center and a more abstract outcome at its periphery, and the two are connected by a metaphorical relation. On a similar note, the Conceptual Metaphor Theory of Lakoff & Johnson (1980; Lakoff 1987) and the Mental Spaces Approach of Fauconnier (1994; Fauconnier & Turner 1998) combined the assumption that word meanings typically have an internal structure arranging multiple related senses in a radial fashion, with the further claim that our use of words is governed by hard-wired mapping mechanisms that catalyze the integration of word meanings across conceptual domains. For example, it is in virtue of these mechanisms that expressions such as “love is war” and “life is a journey” are so widespread across cultures and sound so natural to our ears. On the proposed view, these associations are creative, perceptually grounded, systematic, cross-culturally uniform, and rooted in pre-linguistic patterns of conceptual activity which correlate with core elements of human embodied experience (see the entries on metaphor and embodied cognition ). More in Kövecses (2002), Gibbs (2008), and Dancygier & Sweetser (2014).

Another major innovation introduced by cognitive linguistics is the development of a resolutely “encyclopedic” approach to word meaning, best exemplified by Frame Semantics (Fillmore 1975, 1982) and by the Theory of Domains (Langacker 1987). Approximating a bit, an approach to word meaning can be defined “encyclopedic” insofar as it characterizes knowledge of worldly facts as the primary constitutive force of word meaning. While the Mental Spaces Approach and Conceptual Metaphor Theory regarded word meaning mainly as the product of associative patterns between concepts, Fillmore and Langacker turned their attention to the relation between word meaning and the body of encyclopedic knowledge possessed by typical speakers. Our ability to use and interpret the verb ‘buy’, for example, is closely intertwined with our background knowledge of the social nature of commercial transfer, which involves a seller, a buyer, goods, money, the relation between the money and the goods, and so forth. However, knowledge structures of this kind cannot be modeled as standard concept-like representations. Here is how Frame Semantics attempts to meet the challenge. First, words are construed as pairs of phonographic forms with highly schematic concepts which are internally organized as radial categories and function as access sites to encyclopedic knowledge. Second, an account of the representational organization of encyclopedic knowledge is provided. According to Fillmore, encyclopedic knowledge is represented in long-term memory in the form of frames , i.e., schematic conceptual scenarios that specify the prototypical features and functions of a denotatum, along with its interactions with the objects and the events typically associated with it. Frames thus provide a schematic representation of the elements and entities associated with a particular domain of experience and convey the information required to use and interpret the words employed to talk about it. For example, according to Fillmore & Atkins (1992) the use of the verb ‘bet’ is governed by the risk frame, a schematic scenario specifying the participants, props, and possible outcomes typically involved in situations of risk.
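To give a concrete, if deliberately crude, picture of what treating a frame as a structured scenario amounts to, the sketch below encodes the commercial-transfer scenario invoked above for ‘buy’ as a small data structure. The role inventory (buyer, seller, goods, money) follows the description in the text; the encoding, the perspective table, and the idea that related verbs foreground different roles are illustrative simplifications rather than Fillmore’s formalism.

```python
# Toy rendering of a frame as a schematic scenario with roles and relations.
# The roles follow the commercial-transfer example discussed above; the
# encoding and the perspective table are illustrative simplifications.

COMMERCIAL_TRANSFER = {
    "roles": ["buyer", "seller", "goods", "money"],
    "relations": [
        ("buyer", "gives", "money", "to", "seller"),
        ("seller", "gives", "goods", "to", "buyer"),
    ],
}

# Words are access sites to the frame: different verbs foreground different roles.
PERSPECTIVE = {
    "buy":  {"subject": "buyer",  "object": "goods"},
    "sell": {"subject": "seller", "object": "goods"},
    "pay":  {"subject": "buyer",  "object": "money"},
}

def describe(verb):
    p = PERSPECTIVE[verb]
    return (f"'{verb}' profiles the {p['subject']} acting on the {p['object']} "
            "against the background of the commercial-transfer frame")

print(describe("buy"))
```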

In the same vein as Frame Semantics (more on the parallels in Clausner & Croft 1999), Langacker’s Theory of Domains argues that our understanding of word meaning depends on our access to larger knowledge structures called domains . To illustrate the notion of a domain, consider the word ‘diameter’. The meaning of this word cannot be grasped independently of a prior understanding of the notion of a circle. According to Langacker, word meaning is precisely a matter of “profile-domain” organization: the profile corresponds to a substructural element designated within a relevant macrostructure, whereas the domain corresponds to the macrostructure providing the background information against which the profile can be interpreted (Taylor 2002). In the diameter/circle example, ‘diameter’ designates a profile in the circle domain. Similarly, expressions like ‘hot’, ‘cold’, and ‘warm’ designate properties in the temperature domain. Langacker argues that domains are typically structured into hierarchies that reflect meronymic relations and provide a basic conceptual ontology for language use. For example, the meaning of ‘elbow’ is understood with respect to the arm domain, while the meaning of ‘arm’ is situated within the body domain. Importantly, individual profiles typically inhere in different domains, and this is one of the factors responsible for the ubiquity of polysemy in natural language. For example, the profile associated to the word ‘love’ inheres both in the domains of embodied experience and in the abstract domains of social activities such as marriage ceremonies.

Developments of the approach to word meaning fostered by cognitive linguistics include Construction Grammar (Goldberg 1995), Embodied Construction Grammar (Bergen & Chang 2005), Invited Inferencing Theory (Traugott & Dasher 2001), and LCCM Theory (Evans 2009). The notion of a frame has become popular in cognitive psychology to model the dynamics of ad hoc categorization (e.g., Barsalou 1983, 1992, 1999; more in Section 5.2 ). General information about the study of word meaning in cognitive linguistics can be found in Talmy (2000a,b), Croft & Cruse (2004), and Evans & Green (2006).

In psycholinguistics, the study of word meaning is understood as the investigation of the mental lexicon , the cognitive system that underlies the capacity for conscious and unconscious lexical activity (Jarema & Libben 2007). Simply put, the mental lexicon is the long-term representational inventory storing the body of linguistic knowledge speakers are required to master in order to make competent use of the lexical elements of a language; as such, it can be equated with the lexical component of an individual’s language capacity. Research on the mental lexicon is concerned with a variety of problems (for surveys, see, e.g., Traxler & Gernsbacher 2006, Spivey, McRae & Joanisse 2012, Harley 2014) that center around the following tasks:

  • Define the overall organization of the mental lexicon, specify its components and clarify the role played by such components in lexical production and comprehension;
  • Determine the internal makeup of single components and the way the information they store is brought to bear on lexical performance;
  • Describe the interface mechanisms connecting the mental lexicon to other domains in the human cognitive architecture (e.g., declarative memory);
  • Illustrate the learning processes responsible for the acquisition and the development of lexical abilities.

From a functional point of view, the mental lexicon is usually understood as a system of lexical entries , each containing the information related to a word mastered by a speaker (Rapp 2001). A lexical entry for a word w is typically modeled as a complex representation made up of the following components (Levelt 1989, 2001):

  • A semantic form , determining the semantic contribution made by w to the meaning of sentences containing w ;
  • A grammatical form , assigning w to a grammatical category (noun, verb, adjective) and regulating the behavior of w in syntactic environments;
  • A morphological form , representing the morphemic substructure of w and the morphological operations that can be applied on w ;
  • A phonological form , specifying the set of phonological properties of w ;
  • An orthographic form , specifying the graphic structure of w .
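As a bare-bones illustration, the five components above can be pictured as the fields of a simple record; the field contents here are placeholders rather than a worked-out analysis of any actual entry.

```python
# Bare-bones rendering of a lexical entry with the five components listed
# above; all field contents are placeholders, not a worked-out analysis.
from dataclasses import dataclass
from typing import List

@dataclass
class LexicalEntry:
    semantic_form: str             # contribution to sentence meaning
    grammatical_form: str          # grammatical category and syntactic behavior
    morphological_form: List[str]  # morphemic substructure
    phonological_form: str         # phonological properties
    orthographic_form: str         # graphic structure

cat = LexicalEntry(
    semantic_form="DOMESTIC FELINE (schematic)",
    grammatical_form="noun, count",
    morphological_form=["cat"],
    phonological_form="/kæt/",
    orthographic_form="cat",
)
print(cat.grammatical_form)  # noun, count
```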

From this standpoint, a theory of word meaning translates into an account of the information stored in the semantic form of lexical entries. A crucial part of the task consists in determining exactly what kind of information is stored in lexical semantic forms as opposed to, e.g., bits of information that fall under the scope of episodic memory or general factual knowledge. Recall the example we gave in Section 3.3 : how much of the information that a competent zoologist can associate to tigers is part of her knowledge of the meaning of the word ‘tiger’? Not surprisingly, even in psycholinguistics tracing a neat functional separation between word processing and general-purpose cognition has proven a problematic task. The general consensus among psycholinguists seems to be that lexical representations and conceptual representations are richly interfaced, though functionally distinct (e.g., Gleitman & Papafragou 2013). For example, in clinical research it is standard practice to distinguish between amodal deficits involving an inability to process information at both the conceptual and the lexical level, and modal deficits specifically restricted to one of the two spheres (Saffran & Schwartz 1994; Rapp & Goldrick 2006; Jefferies & Lambon Ralph 2006; more in Section 5.3 ). On the resulting view, lexical activity in humans is the output of the interaction between two functionally neighboring systems, one broadly in charge of the storage and processing of conceptual-encyclopedic knowledge, the other coinciding with the mental lexicon. The role of lexical entries is essentially to make these two systems communicate with one another through semantic forms (see Denes 2009). Contrary to the folk notion of a mental lexicon where words are associated to fully specified meanings or senses which are simply retrieved from the lexicon for the purpose of language processing, in these models lexical semantic forms are seen as highly schematic representations whose primary function is to supervise the recruitment of the extra-linguistic information required to interpret word occurrences in language use. In recent years, appeals to “ultra-thin” lexical entries have taken an eliminativist turn. It has been suggested that psycholinguistic accounts of the representational underpinnings of lexical competence should dispose of the largely metaphorical notion of an “internal word store”, and that there is no such thing as a mental lexicon in the human mind (e.g., Elman 2004, 2009; Dilkina, McClelland & Plaut 2010).

In addition to these approaches, in a number of prominent psychological accounts that have emerged over the last two decades, the study of word meaning is essentially considered a chapter of theories of the mental realization of concepts (see the entry on concepts ). Lexical units are seen either as ingredients of conceptual networks or as (auditory or visual) stimuli providing access to conceptual networks. A flow of neuroscientific results has shown that understanding of (certain categories of) words correlates with neural activations corresponding to the semantic content of the processed words. For example, it has been shown that listening to sentences that describe actions performed with the mouth, hand, or leg activates the visuomotor circuits which subserve execution and observation of such actions (Tettamanti et al. 2005); that reading words denoting specific actions of the tongue (‘lick’), fingers (‘pick’), and leg (‘kick’) differentially activates areas of the premotor cortex that are active when the corresponding movements are actually performed (Hauk et al. 2004); that reading odor-related words (‘jasmine’, ‘garlic’, ‘cinnamon’) differentially activates the primary olfactory cortex (Gonzales et al. 2006); and that color words (such as ‘red’) activate areas in the fusiform gyrus that have been associated with color perception (Chao et al. 1999, Simmons et al. 2007; for a survey of results on visual activations in language processing, see Martin 2007).

This body of research originated so-called simulationist (or enactivist ) accounts of conceptual competence, on which “understanding is imagination” and “imagining is a form of simulation” (Gallese & Lakoff 2005). In these accounts, conceptual (often called “semantic”) competence is seen as the ability to simulate or re-enact perceptual (including proprioceptive and introspective) experiences of the states of affairs that language describes, by manipulating memory traces of such experiences or fragments of them. In Barsalou’s theory of perceptual symbol systems (1999), language understanding (and cognition in general) is based on perceptual experience and memory of it. The central claim is that “sensory-motor systems represent not only perceived entities but also conceptualizations of them in their absence”. Perception generates mostly unconscious “neural representations in sensory-motor areas of the brain”, which represent schematic components of perceptual experience. Such perceptual symbols are not holistic copies of experiences but selections of information isolated by attention. Related perceptual symbols are integrated into a simulator that produces limitless simulations of a perceptual component, such as red or lift . Simulators are located in long-term memory and play the roles traditionally attributed to concepts: they generate inferences and can be combined recursively to implement productivity. A concept is not “a static amodal structure” as in traditional, computationally-oriented cognitive science, but “the ability to simulate a kind of thing perceptually”. Linguistic symbols (i.e., auditory or visual memories of words) get to be associated with simulators; perceptual recognition of a word activates the relevant simulator, which simulates a referent for the word; syntax provides instructions for building integrated perceptual simulations, which “constitute semantic interpretations”.

Though popular among researchers interested in the conceptual underpinnings of semantic competence, the simulationist paradigm faces important challenges. Three are worth mentioning. First, it appears that simulations do not always capture the intuitive truth conditions of sentences: listeners may enact the same simulation upon exposure to sentences that have different truth conditions (e.g., “The man stood on the corner” vs. “The man waited on the corner”; see Weiskopf 2010). Moreover, simulations may overconstrain truth conditions. For example, in the simulations listeners typically associate to the sentence “There are three pencils and four pens in Anna’s mug”, the pens and the pencils are in vertical position; yet the sentence would be true even if they were lying horizontally in the mug. Second, the framework does not sit well with pathological data. For example, no general impairment with auditory-related words is reported in patients with lesions in the auditory association cortex (e.g., auditory agnosia patients); analogously, patients with damage to the motor cortex seem to have no difficulties in linguistic performance, and specifically in inferential processing with motor-related words (for a survey of these results, see Calzavarini, to appear; for a defense of the embodied paradigm, Pulvermüller 2013). Finally, the theory has difficulties accounting for the meaning of abstract words (e.g., ‘beauty’, ‘pride’, ‘kindness’), which does not appear to hinge on sensory-motor simulation (see Dove 2016 for a discussion).

Beginning in the mid-1970s, neuropsychological research on cognitive deficits related to brain lesions has produced a considerable body of findings related to the neural correlates of lexical semantic information and processing. More recently, the development of neuroimaging techniques such as PET, fMRI and ERP has provided further means to adjudicate hypotheses about lexical semantic processes in the brain (Vigneau et al. 2006). Here we do not intend to provide a complete overview of such results (for a survey, see Faust 2012). We shall just mention three topics of neurolinguistic research that appear to bear on issues in the study of word meaning: the partition of the lexicon into categories, the representation of common nouns vs. proper names, and the distinction between the inferential and the referential aspects of lexical competence.

Two preliminary considerations should be kept in mind. First, a distinction must be drawn between the neural realization of word forms, i.e., traces of acoustic, articulatory, graphic, and motor configurations (‘peripheral lexicons’), and the neural correlates of lexical meanings (‘concepts’). A patient can understand what object is represented by a picture shown to her (and give evidence of her understanding, e.g., by miming the object’s function) while being unable to retrieve the relevant phonological form from her output lexicon (Warrington 1985; Shallice 1988). Second, there appears to be wide consensus about the irrelevance to brain processing of any distinction between strictly semantic and factual or encyclopedic information (e.g., Tulving 1972; Sartori et al. 1994). Whatever information is relevant to such processes as object recognition or confrontation naming is standardly characterized as ‘semantic’. This may be taken as a stipulation (it is just how neuroscientists use the word ‘semantic’), or as deriving from lack of evidence for any segregation between the domains of semantic and encyclopedic information (see Binder et al. 2009). Be that as it may, in present-day neuroscience there seems to be no room for a correlate of the analytic/synthetic distinction. Moreover, in the literature ‘semantic’ and ‘conceptual’ are often used synonymously; hence, no distinction is drawn between lexical semantic and conceptual knowledge. Finally, the focus of neuroscientific research on “semantics” is on information structures roughly corresponding to word-level meanings, not to sentence-level meanings: hence, so far neuroscientific research has had little to say about the compositional mechanisms that have been the focus (and, often, the entire content) of theories of meaning as pursued within formal semantics and philosophy of language.

Let us start with the partition of the semantic lexicon into categories. Neuropsychological research indicates that the ability to name objects belonging to particular categories, or to answer simple questions involving the corresponding nouns, can be selectively lost or preserved: subjects can perform much better in naming living entities than in naming artifacts, or in naming animate living entities than in naming fruits and vegetables (Shallice 1988). Different patterns of brain activation may correspond to such dissociations between performances: e.g., Damasio et al. (1996) found that retrieval of names of animals and of tools activate different regions in the left temporal lobe. However, the details of this partition have been interpreted in different ways. Warrington & McCarthy (1983) and Warrington & Shallice (1984) explained the living vs. artifactual dissociation by taking the category distinction to be an effect of the difference among features that are crucial in the identification of living entities and artifacts: while living entities are identified mainly on the basis of perceptual features, artifacts are identified by their function. A later theory (Caramazza & Shelton 1998) claimed that animate and inanimate objects are treated by different knowledge systems separated by evolutionary pressure: domains of features pertaining to the recognition of living things, human faces, and perhaps tools may have been singled out as recognition of such entities had survival value for humans. Finally, Devlin et al. (1998) proposed to view the partition as the consequence of a difference in how recognition-relevant features are connected with one another: in the case of artifactual kinds, an object is recognized thanks to a characteristic coupling of form and function, whereas no such coupling individuates kinds of living things (e.g., eyes go with seeing in many animal species). For non-neutral surveys, see Caramazza & Mahon (2006) and Shallice & Cooper (2011).

On the other hand, it is also known that “semantic” (i.e., conceptual) competence may be lost in its entirety (though often gradually). This is what typically happens in semantic dementia. Empirical evidence has motivated theories of the neural realization of conceptual competence that are meant to account for both modality-specific deficits and pathologies that involve impairment across all modalities. The former may involve difficulty or outright inability in categorizing a visually presented object which can, however, be correctly categorized in other modalities (e.g., if the object is touched) or verbally described on the basis of the object’s name (i.e., on the basis of the lexical item supposedly associated with the category). The original “hub and spokes” model of the brain representation of concepts (Rogers et al. 2004, Patterson et al. 2007) accounted for both sets of findings by postulating that the semantic network is composed of a series of “spokes”, i.e., cortical areas distributed across the brain processing modality-specific (visual, auditory, motor, as well as verbal) sources of information, and that the spokes are bidirectionally connected to a transmodal “hub”. While damage to the spokes accounts for modality-specific deficits, damage to the hub and its connections explains the overall impairment of semantic competence. On this model, the hub is supposed to be located in the anterior temporal lobe (ATL), since semantic dementia had been found to be associated with degeneration of the anterior ventral and polar regions of both temporal poles (Guo et al. 2013). According to more recent, “graded” versions of the model (Lambon Ralph et al. 2017), the contribution of the hub units may vary depending on different patterns of connectivity to the spokes, to account for evidence of graded variation of function across subregions of ATL. It should be noted that while many researchers converge on a distributed view of semantic representation and on the role of domain-specific parts of the neural network (depending on differential patterns of functional connectivity), not everybody agrees on the need to postulate a transmodal hub (see, e.g., Mahon & Caramazza 2011).

Let us now turn to common nouns and proper names. As we have seen, in the philosophy of language of the last decades, proper names (of people, landmarks, countries, etc.) have been regarded as semantically different from common nouns. Neuroscientific research on the processing of proper names and common nouns concurs, to some extent. To begin with, the retrieval of proper names is doubly dissociated from the retrieval of common nouns. Some patients proved competent with common nouns but unable to associate names to pictures of famous people, or buildings, or brands (Ellis, Young & Critchley 1989); in other cases, people’s names were specifically affected (McKenna & Warrington 1980). Other patients had the complementary deficit. The patient described in Semenza & Sgaramella (1993) could name no objects at all (with or without phonemic cues) but he was able to name 10 out of 10 familiar people, and 18 out of 22 famous people with a phonemic cue. Martins & Farrajota’s (2007) patient ACB also presented impaired object naming but spared retrieval of proper names. Such findings suggest distinct neural pathways for the retrieval of proper names and common nouns (Semenza 2006). The study of lesions and neuroimaging research both initially converged in identifying the left temporal pole as playing a crucial role in the retrieval of proper names, from both visual stimuli (Damasio et al. 1996) and the presentation of speaker voices (Waldron et al. 2014) (though in at least one case damage to the left temporal pole was associated with selective sparing of proper names; see Martins & Farrajota 2007). In addition, recent research has found a role for the uncinate fasciculus (UF). In patients undergoing surgical removal of the UF, retrieval of common nouns was recovered while retrieval of proper names remained impaired (Papagno et al. 2016). The present consensus appears to be that “the production of proper names recruits a network that involves at least the left anterior temporal lobe and the left orbitofrontal cortex connected together by the UF” (Brédart 2017).

Furthermore, a few neuropsychological studies have described patients whose competence on geographical names was preserved while names of people were lost: one patient had preserved country names, though he had lost virtually every other linguistic ability (McKenna & Warrington 1978; see Semenza 2006 for other cases of selective preservation of geographical names). Other behavioral experiments seem to show that country names are closer to common nouns than to other proper names such as people and landmark names in that the connectivity between the word and the conceptual system is likely to require diffuse multiple connections, as with common nouns (Hollis & Valentine 2001). If these results were confirmed, it would turn out that the linguistic category of proper names is not homogeneous in terms of neural processing. Studies have also demonstrated that the retrieval of proper names from memory is typically a more difficult cognitive task than the retrieval of common nouns. For example, it is harder to name faces (of famous people) than to name objects; moreover, it is easier to remember a person’s occupation than her or his name. Interestingly, the same difference does not materialize in definition naming, i.e., in tasks where names and common nouns are to be retrieved from definitions (Hanley 2011). Though several hypotheses about the source of this difference have been proposed (see Brédart 2017 for a survey), no consensus has been reached on how to explain this phenomenon.

Finally, a few words on the distinction between the inferential and the referential component of lexical competence. As we have seen in Section 3.2 , Marconi (1997) suggested that processing of lexical meaning might be distributed between two subsystems, an inferential and a referential one. Beginning with Warrington (1975), many patients have been described that were more or less severely impaired in referential tasks such as naming from vision (and other perceptual modalities as well), while their inferential competence was more or less intact. The complementary pattern (i.e., the preservation of referential abilities with loss of inferential competence) is definitely less common. Still, a number of cases have been reported, beginning with a stroke patient of Heilman et al. (1976), who, while unable to perform any task requiring inferential processing, performed well in referential naming tasks with visually presented objects (he could name 23 of 25 common objects). In subsequent years, further cases were described. For example, in a study of 61 patients with lesions affecting linguistic abilities, Kemmerer et al. (2012) found 14 cases in which referential abilities were better preserved than inferential abilities. More recently, Pandey & Heilman (2014), while describing one more case of preserved (referential) naming from vision with severely impaired (inferential) naming from definition, hypothesized that “these two naming tasks may, at least in part, be mediated by two independent neuronal networks”. Thus, while double dissociation between inferential processes and naming from vision is well attested, it is not equally clear that it involves referential processes in general. On the other hand, evidence from neuroimaging is, so far, limited and overall inconclusive. Some neuroimaging studies (e.g., Tomaszewski-Farias et al. 2005, Marconi et al. 2013), as well as TMS mapping experiments (Hamberger et al. 2001, Hamberger & Seidel 2009), did find different patterns of activation for inferential vs. referential performances. However, the results are not entirely consistent and are liable to different interpretations. For example, the selective activation of the anterior left temporal lobe in inferential performances may well reflect additional syntactic demands involved in definition naming, rather than be due to inferential processing as such (see Calzavarini 2017 for a discussion).

Bibliography

  • Aitchison, J., 2012, Words in the Mind: An Introduction to the Mental Lexicon, 4th edn., London: Wiley-Blackwell.
  • Allan, K. (ed.), 2013, The Oxford Handbook of the History of Linguistics , Oxford: Oxford University Press.
  • Allott, N. and M. Textor, 2012, “Lexical Pragmatic Adjustment and the Nature of Ad Hoc Concepts”, International Review of Pragmatics, 4: 185–208.
  • Alward, P., 2005, “Between the Lines of Age: Reflections on the Metaphysics of Words”, Pacific Philosophical Quarterly , 86: 172–187.
  • Asher, N., 2011, Lexical Meaning in Context: A Web of Words , Cambridge: Cambridge University Press.
  • Asher, N. and A. Lascarides, 1995, “Lexical Disambiguation in a Discourse Context”, Journal of Semantics , 12: 69–108.
  • Atkins, B.T.S. and A. Zampolli (eds.), 1994, Computational Approaches to the Lexicon , Oxford: Oxford University Press.
  • Bach, K., 1994, “Conversational Impliciture”, Mind and Language, 9: 124–162.
  • Bahr, A., M. Carrara, and L. Jansen, 2019, “Functions and Kinds of Art Works and Other Artifacts: An Introduction”, Grazer Philosophische Studien, 96: 1–18.
  • Barsalou, L., 1983, “Ad Hoc Categories”, Memory and Cognition , 11: 211–227.
  • –––, 1992, “Frames, Concepts and Conceptual Fields”, in Lehrer and Kittay 1992: 21–74.
  • –––, 1999, “Perceptual Symbol Systems”, Behavioral and Brain Sciences , 22: 577–609.
  • Béjoint, H., 2000, Modern Lexicography: An Introduction , Oxford: Oxford University Press.
  • Bergen, B.K. and N. Chang, 2005, “Embodied Construction Grammar in Simulation-Based Language Understanding”, in J.-O. Östman and M. Fried (eds.), Construction Grammars: Cognitive Grounding and Theoretical Extensions , Amsterdam: Benjamins, 147–190.
  • Bezuidenhout, A., 2002, “Truth-Conditional Pragmatics”, Philosophical Perspectives , 16: 105–134.
  • Bierwisch, M., 1983a, “Formal and Lexical Semantics”, Linguistische Studien, 114: 56–79.
  • –––, 1983b, “Major Aspects of the Psychology of Language”, Linguistische Studien, 114: 1–38.
  • Bierwisch, M. and E. Lang (eds.), 1989, Dimensional Adjectives: Grammatical Structure and Conceptual Interpretation , Berlin: Springer.
  • Bilgrami, A., 1992, Meaning and Belief , Oxford: Blackwell.
  • Binder, J.R., R.H. Desai, W.W. Graves, and L.L. Conant, 2009, “Where Is the Semantic System? A Critical Review and Meta-Analysis of 120 Functional Neuroimaging Studies”, Cerebral Cortex , 19: 2767–2796.
  • Block, N., 1986, “An Advertisement for a Semantics for Psychology”, Midwest Studies in Philosophy , 10: 615–678.
  • Blutner, R., 2002, “Lexical Semantics and Pragmatics”, Linguistische Berichte , 10: 27–58.
  • Bock, K. and W.J.M. Levelt, 1994, “Language Production: Grammatical Encoding”, in M.A. Gernsbacher (ed.), Handbook of Psycholinguistics , San Diego, CA: Academic Press, 945–984.
  • Bohnemeyer, J., 2003, “NSM without the Strong Lexicalization Hypothesis”, Theoretical Linguistics , 29: 211–222.
  • Bonomi, A., 1983, “Linguistica e Logica”, in C. Segre (ed.), Intorno alla Linguistica , Milan: Feltrinelli, 425–453.
  • Booij, G., 2007, The Grammar of Words: An Introduction to Linguistic Morphology, 2nd edition, Oxford: Oxford University Press.
  • Borg, E., 2004, Minimal Semantics , Oxford: Oxford University Press.
  • –––, 2012, Pursuing Meaning , Oxford: Oxford University Press.
  • Bréal, M., 1924 [1897], Essai de Sémantique: Science des Significations , Paris: Gérard Monfort.
  • Brédart, S., 2017, “The Cognitive Psychology and Neuroscience of Naming People”, Neuroscience & Biobehavioral Reviews, 83: 145–154.
  • Bromberger, S., 2011, “What are Words? Comments on Kaplan (1990), on Hawthorne and Lepore, and on the Issue”, Journal of Philosophy , 108: 485–503.
  • Brugman, C., 1988 [1981], The Story of “Over”: Polysemy, Semantics and the Structure of the Lexicon , New York, NY: Garland.
  • Brugman, C. and G. Lakoff, 1988, “Cognitive Topology and Lexical Networks”, in S. Small, G. Cottrell and M. Tanenhaus (eds.), Lexical Ambiguity Resolution, San Mateo, CA: Morgan Kaufmann, 477–508.
  • Burge, T., 1979, “Individualism and the Mental”, Midwest Studies in Philosophy , 6: 73–121.
  • –––, 1993, “Concepts, Definitions, and Meaning”, Metaphilosophy , 24: 309–25.
  • Calzavarini, F., 2017, “Inferential and Referential Lexical Semantic Competence: A Critical Review of the Supporting Evidence”, Journal of Neurolinguistics , 44: 163–189.
  • –––, forthcoming, Brain and the Lexicon , New York: Springer.
  • Cappelen, H., 1999, “Intentions in Words”, Noûs , 33: 92–102.
  • Cappelen, H. and E. Lepore, 2005, Insensitive Semantics: A Defense of Semantic Minimalism and Speech Act Pluralism , Oxford: Blackwell.
  • Caramazza, A. and J. Shelton, 1998, “Domain Specific Knowledge Systems in the Brain: The Animate-Inanimate Distinction”, Journal of Cognitive Neuroscience , 10: 1–34.
  • Caramazza, A. and B.Z. Mahon, 2006, “The Organization of Conceptual Knowledge in the Brain: The Future’s Past and Some Future Directions”, Cognitive Neuropsychology , 23: 13–38.
  • Carnap, R., 1947, Meaning and Necessity , Chicago, IL: University of Chicago Press.
  • –––, 1952, “Meaning Postulates”, Philosophical Studies , 3: 65–73.
  • –––, 1955, “Meaning and Synonymy in Natural Languages”, Philosophical Studies , 6: 33–47.
  • Carston, R., 2002, Thoughts and Utterances , Oxford: Blackwell.
  • Chalmers, D., 1996, The Conscious Mind , Oxford: Oxford University Press.
  • –––, 2002, “On Sense and Intension”, Noûs, 36 (Suppl. 16): 135–182.
  • Chao, L.L., J.V. Haxby, and A. Martin, 1999, “Attribute-Based Neural Substrates in Temporal Cortex for Perceiving and Knowing about Objects”, Nature Neuroscience , 2: 913–919.
  • Chierchia, G. and S. McConnell-Ginet, 2000, Meaning and Grammar: An Introduction to Semantics, 2nd edn., Cambridge, MA: MIT Press.
  • Chomsky, N., 1957, Syntactic Structures , The Hague: Mouton.
  • –––, 1965, Aspects of the Theory of Syntax , Cambridge, MA: MIT Press.
  • –––, 2000, New Horizons in the Study of Language and Mind , Cambridge: Cambridge University Press.
  • Church, A., 1951, “A Formulation of the Logic of Sense and Denotation”, in P. Henle, H.M. Kallen, and S.K. Langer (eds.), Structure, Method and Meaning , New York, NY: Liberal Arts Press, 3–24.
  • Clausner, T.C. and W. Croft, 1999, “Domains and Image Schemas”, Cognitive Linguistics , 10: 1–31.
  • Collins, A.M. and M.R. Quillian, 1969, “Retrieval Time from Semantic Memory”, Journal of Verbal Learning & Verbal Behavior , 8: 240–247.
  • Collins, A.M. and E.F. Loftus, 1975, “A Spreading-Activation Theory of Semantic Processing”, Psychological Review , 82: 407–428.
  • Croft, W. and D.A. Cruse, 2004, Cognitive Linguistics , Cambridge: Cambridge University Press.
  • Cruse, A.D., 1986, Lexical Semantics , Cambridge: Cambridge University Press.
  • Damasio, H., T.J. Grabowski, D. Tranel, R.D. Hitchwa, and A.R. Damasio, 1996, “A Neural Basis for Lexical Retrieval”, Nature , 380: 499–505.
  • Davidson, D., 1967, “Truth and Meaning”, Synthese , 17: 304–323.
  • –––, 1984, Inquiries into Truth and Interpretation , Oxford: Oxford University Press.
  • Davidson, D. and G. Harman (eds.), 1972, Semantics of Natural Language , Dordrecht: Reidel.
  • Dancygier, B. and E. Sweetser, 2014, Figurative Language , Cambridge: Cambridge University Press.
  • Deane, P.D., 1996, “On Jackendoff’s Conceptual Semantics”, Cognitive Linguistics , 7: 35–92.
  • Del Bello, D., 2007, Forgotten Paths: Etymology and the Allegorical Mindset , Washington, D.C.: Catholic University of America Press.
  • Del Pinal, G., 2018, “Meaning, Modulation, and Context: A Multidimensional Semantics for Truth-conditional Pragmatics”, Linguistics & Philosophy , 41: 165–207.
  • Denes, G., 2009, Talking Heads: The Neuroscience of Language , New York, NY: Psychology Press.
  • Devitt, M., 1983, “Dummett’s Anti-Realism”, Journal of Philosophy , 80: 73–99.
  • Devitt, M. and K. Sterelny, 1987, Language and Reality: An Introduction to the Philosophy of Language , Oxford: Blackwell.
  • Devlin, J.T., L.M. Gonnerman, E.S. Andersen, and M.S. Seidenberg, 1998, “Category Specific Semantic Deficits in Focal and Widespread Brain Damage: A Computational Account”, Journal of Cognitive Neuroscience , 10: 77–94.
  • Dilkina, K., J.L. McClelland, and D.C. Plaut, 2010, “Are There Mental Lexicons? The Role of Semantics in Lexical Decision”, Brain Research , 1365: 66–81.
  • Di Sciullo, A.-M. and E. Williams, 1987, On the Definition of Word , Cambridge, MA: MIT Press.
  • Dove, G., 2016, “Three Symbol Ungrounding Problems: Abstract Concepts and the Future of Embodied Cognition”, Psychonomic Bulletin & Review , 23: 1109–1121.
  • Dowty, D.R., 1979, Word Meaning and Montague Grammar , Dordrecht: Reidel.
  • Dummett, M., 1976, “What Is a Theory of Meaning?”, in S. Guttenplan (ed.), Mind and Language , Oxford: Oxford University Press, 97–138.
  • –––, 1991, The Logical Basis of Metaphysics , London: Duckworth.
  • Egré, P., 2015, “Explanation in Linguistics”, Philosophy Compass , 10: 451–462.
  • Ellis, A.W., A.W. Young, and E.M. Critchley, 1989, “Loss of Memory for People Following Temporal Lobe Damage”, Brain , 112: 1469–1483.
  • Elman, J.L., 2004, “An Alternative View of the Mental Lexicon”, Trends in Cognitive Sciences , 8: 301–306.
  • –––, 2009, “On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon”, Cognitive Science , 33: 547–582.
  • Evans, G., 1982, The Varieties of Reference, Oxford: Clarendon Press.
  • Evans, V., 2009, How Words Mean: Lexical Concepts, Cognitive Models, and Meaning Construction , Oxford: Oxford University Press.
  • –––, 2010, “Cognitive Linguistics”, in L. Cummings (ed.), The Routledge Pragmatics Encyclopedia , London: Routledge, 46–49.
  • Evans, V. and M. Green, 2006, Cognitive Linguistics: An Introduction , Edinburgh: Edinburgh University Press.
  • Evens, M.W., B.E. Litowitz, J.E. Markowitz, R.N. Smith, and O. Werner, 1980, Lexical-Semantic Relations: A Comparative Survey , Edmonton: Linguistic Research.
  • Fauconnier, G., 1994, Mental Spaces: Aspects of Meaning Construction in Natural Language , New York, NY: Cambridge University Press.
  • Fauconnier, G. and M. Turner, 1998, “Conceptual Integration Networks”, Cognitive Science , 22: 133–187.
  • Faust, M. (ed.), 2012, The Handbook of the Neuropsychology of Language , 2 vols., Oxford: Wiley Blackwell.
  • Fellbaum, C., 1998, WordNet: An Electronic Lexical Database , Cambridge, MA: MIT Press.
  • Fillmore, C., 1975, “An Alternative to Checklist Theories of Meaning”, Proceedings of the First Annual Meeting of the Berkeley Linguistics Society , Amsterdam: North Holland.
  • –––, 1982, “Frame Semantics”, in Linguistic Society of Korea (ed.), Linguistics in the Morning Calm , Seoul: Hanshin Publishing, 111–137.
  • Fillmore, C. and B.T. Atkins, 1992, “Toward a Frame-Based Lexicon: The Semantics of risk and its Neighbors”, in Lehrer and Kittay 1992: 75–102.
  • Fodor, J.A., 1983, The Modularity of Mind , Cambridge, MA: MIT Press.
  • –––, 1998, Concepts: Where Cognitive Science Went Wrong , Oxford: Oxford University Press.
  • Fodor, J.A. and E. Lepore, 1992, Holism: A Shopper’s Guide , Oxford: Blackwell.
  • –––, 1998, “The Emptiness of the Lexicon: Reflections on James Pustejovsky’s The Generative Lexicon ”, Linguistic Inquiry , 29: 269–288.
  • Fodor, J.D., 1977, Semantics: Theories of Meaning in Generative Grammar , New York, NY: Harper & Row.
  • Frege, G., 1892, “Über Sinn und Bedeutung”, Zeitschrift für Philosophie und philosophische Kritik , 100: 25–50.
  • –––, 1979a [1897], “Logic”, in Posthumous Writings , Oxford: Blackwell.
  • –––, 1979b [1914], “Logic in Mathematics”, in Posthumous Writings , Oxford: Blackwell.
  • Fumaroli, M. (ed.), 1999, Histoire de la Rhétorique dans l’Europe Moderne 1450–1950, Paris: Presses Universitaires de France.
  • Gallese, V. and G. Lakoff, 2005, “The Brain’s Concepts: The Role of the Sensory-Motor System in Conceptual Knowledge”, Cognitive Neuropsychology , 21: 455–479.
  • Gasparri, L., 2016, “Originalism about Word Types”, Thought: A Journal of Philosophy , 5: 126–133.
  • Geeraerts, D., 2006, Words and Other Wonders: Papers on Lexical and Semantic Topics , Berlin: Mouton de Gruyter.
  • –––, 2010, Theories of Lexical Semantics , Oxford: Oxford University Press.
  • –––, 2013, “Lexical Semantics From Speculative Etymology to Structuralist Semantics”, in Allan 2013: 555–570.
  • Gibbs, R.W. Jr. (ed.), 2008, The Cambridge Handbook of Metaphor and Thought , Cambridge: Cambridge University Press.
  • Gleitman, L. and A. Papafragou, 2013, “Relations Between Language and Thought”, in D. Reisberg (ed.), The Oxford Handbook of Cognitive Psychology , Oxford: Oxford University Press, 255–275.
  • Goddard, C., 1998, “Bad Arguments Against Semantic Primes”, Theoretical Linguistics , 24: 129–156.
  • –––, 2012, “Semantic Primes, Semantic Molecules, Semantic Templates: Key Concepts in the NSM Approach to Lexical Typology”, Linguistics , 50: 711–743.
  • Goddard, C. and A. Wierzbicka (eds.), 2002, Meaning and Universal Grammar: Theory and Empirical Findings , 2 Vols., Amsterdam: Benjamins.
  • –––, 2007, “Semantic Primes and Cultural Scripts in Language Learning and Intercultural Communication”, in G. Palmer and F. Sharifian (eds.), Applied Cultural Linguistics: Implications for Second Language Learning and Intercultural Communication , Amsterdam: Benjamins, 105–124.
  • Goldberg, A., 1995, Constructions: A Construction Grammar Approach to Argument Structure , Chicago, IL: Chicago University Press.
  • González, J., et al., 2006, “Reading Cinnamon Activates Olfactory Brain Regions”, NeuroImage, 32: 906–912.
  • Gordon, W.T., 1982, A History of Semantics , Amsterdam: Benjamins.
  • Grandy, R., 1974, “Some Remarks about Logical Form”, Noûs, 8: 157–164.
  • Grice, H.P., 1975, “Logic and Conversation”, in P. Cole and J.L. Morgan (eds.), Syntax and Semantics 3: Speech Acts , New York, NY: Academic Press, 41–58.
  • Guo, C.C., et al., 2013, “Anterior Temporal Lobe Degeneration Produces Widespread Network-Driven Dysfunction”, Brain , 136: 2979–2991.
  • Hamberger, M.J., R.R. Goodman, K. Perrine, and T. Tamny, 2001, “Anatomic Dissociation of Auditory and Visual Naming in the Lateral Temporal Cortex”, Neurology , 56: 56–61.
  • Hamberger, M.J. and W.T. Seidel, 2009, “Localization of Cortical Dysfunction Based on Auditory and Visual Naming Performance”, Journal of the International Neuropsychological Society , 15: 529–535.
  • Hanks, P., 2013, “Lexicography from Earliest Times to the Present”, in Allan 2013: 503–536.
  • Hanley, J.R., 2011, “Why are Names of People Associated with so many Phonological Retrieval Failures?”, Psychonomic Bulletin & Review , 18: 612–617.
  • Harley, T.A., 2014, The Psychology of Language: From Data to Theory, 4th edn., New York, NY: Psychology Press.
  • Harnad, S., 1990, “The Symbol-grounding Problem”, Physica D, 42: 335–346.
  • Harris, R.A., 1993, The Linguistics Wars , New York, NY: Oxford University Press.
  • Hauk, O., I. Johnsrude, and F. Pulvermüller, 2004, “Somatotopic Representation of Action Words in Human Motor and Premotor Cortex”, Neuron , 41: 301–307.
  • Hawthorne, J. and E. Lepore, 2011, “On Words”, Journal of Philosophy , 108: 447–485.
  • Heilman, K.M., D.M. Tucker, and E. Valenstein, 1976, “A Case of Mixed Transcortical Aphasia with Intact Naming”, Brain , 99: 415–426.
  • Herrick, J.A., 2004, The History and Theory of Rhetoric , London: Pearson.
  • Hinton, G.E. and T. Shallice, 1991, “Lesioning an Attractor Network: Investigations of Acquired Dyslexia”, Psychological Review , 98: 74–95.
  • Hollis, J. and T. Valentine, 2001, “Proper Name Processing: Are Proper Names Pure Referencing Expressions?”, Journal of Experimental Psychology: Learning, Memory and Cognition , 27: 99–116.
  • Irmak, N., forthcoming, “An Ontology of Words”, Erkenntnis , first online 17 April 2018; doi: https://doi.org/10.1007/s10670-018-0001-0
  • Jackendoff, R.S., 1983, Semantics and Cognition , Cambridge, MA: MIT Press
  • –––, 1990, Semantic Structures , Cambridge, MA: MIT Press.
  • –––, 2002, Foundations of Language: Brain, Meaning, Grammar, Evolution , Oxford: Oxford University Press.
  • Jackson, H., 2002, Lexicography: An Introduction , London: Routledge.
  • Jarema, G. and G. Libben, 2007, “Introduction: Matters of Definition and Core Perspectives”, in G. Jarema and G. Libben (eds.), The Mental Lexicon: Core Perspectives , Amsterdam: Elsevier, 1–6.
  • Jayez, J., 2001, “Underspecification, Context Selection, and Generativity”, in P. Bouillon and F. Busa (eds.), The Language of Word Meaning , Cambridge: Cambridge University Press, 124–148.
  • Jefferies, E. and M.A. Lambon Ralph, 2006, “Semantic Impairment in Stroke Aphasia Versus Semantic Dementia: A Case-Series Comparison”, Brain , 129: 2132–2147.
  • Kaplan, D., 1990, “Words”, Proceedings of the Aristotelian Society, Supplementary Volume , 64: 93–119.
  • –––, 2011, “Words on Words”, Journal of Philosophy , 108: 504–529.
  • Katz, J.J., 1972, Semantic Theory , New York, NY: Harper & Row.
  • –––, 1987, “Common Sense in Semantics”, in E. Lepore and B. Loewer (eds.), New Directions in Semantics , London: Academic Press, 157–233.
  • Katz, J.J. and J.A. Fodor, 1963, “The Structure of a Semantic Theory”, Language , 39: 170–210.
  • Kehler, A., 2002, Coherence, Reference, and the Theory of Grammar, Stanford, CA: CSLI Publications.
  • Kemmerer, D., D. Rudrauf, K. Manzel, and D. Tranel, 2012, “Behavioral Patterns and Lesion Sites Associated with Impaired Processing of Lexical and Conceptual Knowledge of Actions”, Cortex , 48: 826–848.
  • Kennedy, G., 1994, A New History of Classical Rhetoric , Princeton, NJ: Princeton University Press.
  • Kornblith, H., 1980, “Referring to Artifacts”, Philosophical Review , 89: 109–114.
  • Kövecses, Z., 2002, Metaphor: A Practical Introduction , Oxford: Oxford University Press.
  • Kremin, H., 1986, “Spared Naming Without Comprehension”, Journal of Neurolinguistics, 2: 131–150.
  • Kripke, S., 1972, “Naming and Necessity”, in Davidson and Harman 1972, 253–355, 763–769. Reprinted later as: 1980, Naming and Necessity , Oxford: Blackwell.
  • Lakoff, G., 1987, Women, Fire and Dangerous Things: What Categories Reveal About the Mind , Chicago, IL: University of Chicago Press.
  • Lakoff, G. and M. Johnson, 1980, Metaphors We Live By , Chicago, IL: University of Chicago Press.
  • Lambon Ralph, M.A., E. Jefferies, K. Patterson, and T.T. Rogers, 2017, “The Neural and Computational Basis of Semantic Cognition”, Nature Reviews Neuroscience, 18: 42–55.
  • Lang, E., 1993, “The Meaning of German Projective Prepositions: A Two-Level Approach”, in C. Zelinsky-Wibbelt (ed.), The Semantics of Prepositions: From Mental Processing to Natural Language Processing , Berlin: Mouton de Gruyter, 249–291.
  • Lang, E. and C. Maienborn, 2011, “Two-Level Semantics: Semantic Form and Conceptual Structure”, in Maienborn, von Heusinger and Portner 2011: 709–740.
  • Langacker, R., 1987, Foundations of Cognitive Grammar, Volume I , Stanford, CA: Stanford University Press.
  • Lascarides, A. and A. Copestake, 1998, “The Pragmatics of Word Meaning”, Journal of Linguistics , 34: 387–414.
  • Leech, G., 1974, Semantics , Harmondsworth: Penguin.
  • Lehmann, F. (ed.), 1992, Semantic Networks , Special issue of Computers and Mathematics with Applications , 23(2–5).
  • Lehrer, A., 1974, Semantic Fields and Lexical Structure , Amsterdam: Benjamins.
  • Lehrer, A. and E. Kittay (eds.), 1992, Frames, Fields and Contrasts , Hillsdale, NJ: Erlbaum.
  • Lepschy, G.C., 1970, A Survey of Structural Linguistics , London: Faber & Faber.
  • Levelt, W.J.M., 1989, Speaking: From Intention to Articulation , Cambridge, MA: MIT Press.
  • –––, 2001, “Spoken Word Production: A Theory of Lexical Access”, Proceedings of the National Academy of Sciences , 98: 13464–13471.
  • Lewis, D.K., 1972, “General Semantics”, in Davidson and Harman 1972, 169–218.
  • Lieber, R., 2010, Introducing Morphology , Cambridge: Cambridge University Press.
  • Lipka, L., 1992, An Outline of English Lexicology: Lexical Structure, Word Semantics, and Word-Formation, 2nd edn., Tübingen: Niemeyer.
  • Loar, B., 1981, Mind and Meaning , Cambridge: Cambridge University Press.
  • Ludlow, P., 2014, Living Words: Meaning Underdetermination and the Dynamic Lexicon , Oxford: Oxford University Press.
  • Lyons, J., 1963, Structural Semantics , Oxford: Blackwell.
  • Mahon, B.Z. and A. Caramazza, 2011, “What Drives the Organization of Conceptual Knowledge in the Brain?”, Trends in Cognitive Science , 15: 97–103.
  • Maienborn, C., K. von Heusinger and P. Portner (eds.), 2011, Semantics: An International Handbook of Natural Language Meaning , Vol. 1, Berlin: Mouton de Gruyter.
  • Malkiel, Y., 1993, Etymology , Cambridge: Cambridge University Press.
  • Manning, C. and H. Schütze, 1999, Foundations of Statistical Natural Language Processing , Cambridge, MA: MIT Press.
  • Marconi, D., 1997, Lexical Competence , Cambridge, MA: MIT Press.
  • –––, 2013, “Pencils Have a Point: Against Generalized Externalism About Artifactual Words”, Review of Philosophy and Psychology , 4: 497–513
  • Marconi, D., R. Manenti, E. Catricalà, P.A. Della Rosa, S. Siri, and S.F. Cappa, 2013, “The Neural Substrates of Inferential and Referential Semantic Processing”, Cortex , 49: 2055–2066.
  • Martin, A., 2007, “The Representation of Object Concepts in the Brain”, Annual Review of Psychology , 58: 25–45.
  • Matthews, P.H., 1991, Morphology, 2nd edn., Cambridge: Cambridge University Press.
  • Matthewson, L., 2003, “Is the Meta‑Language Really Natural?”, Theoretical Linguistics , 29: 263–274.
  • McCulloch, G., 1991, “Making Sense of Words”, Analysis , 51: 73–79.
  • McGinn, C., 1982, “The Structure of Content”, in A. Woodfield (ed.), Thought and Object , Oxford: Clarendon Press, 207–258.
  • McKenna, P. and E.K. Warrington, 1978, “Category-Specific Naming Preservation: A Single Case Study”, Journal of Neurology, Neurosurgery, and Psychiatry , 41: 571–574.
  • Meier-Oeser, S., 2011, “Meaning in Pre-19th Century Thought”, in Maienborn, von Heusinger and Portner 2011: 145–171.
  • Mervis, C.B. and E. Rosch, 1981, “Categorization of Natural Objects”, Annual Review of Psychology , 32: 89–115.
  • Millikan, R., 2005, Language: A Biological Model , Oxford: Oxford University Press.
  • Montague, R., 1974, Formal Philosophy: Selected Papers of Richard Montague , ed. by R.H. Thomason, New Haven, CT and London: Yale University Press.
  • Murphy, G.L., 2002, The Big Book of Concepts , Cambridge, MA: MIT Press.
  • Murphy, M.L., 2003, Semantic Relations and the Lexicon: Antonymy, Synonymy, and Other Paradigms , Cambridge: Cambridge University Press.
  • –––, 2010, Lexical Meaning , Cambridge: Cambridge University Press.
  • Nerlich, B., 1992, Semantic Theories in Europe 1830–1930: From Etymology to Contextuality , Amsterdam: Benjamins.
  • Nerlich, B. and D.D. Clarke, 1996, Language, Action and Context: The Early History of Pragmatics in Europe and America , Amsterdam: Benjamins.
  • –––, 2007, “Cognitive Linguistics and the History of Linguistics”, in D. Geeraerts, H. Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics , Oxford: Oxford University Press, 589–607.
  • Newmeyer, F.J., 1980, Linguistic Theory in America: The First Quarter-Century of Transformational Generative Grammar , New York, NY: Academic Press.
  • Pagin, P., 2006, “Meaning Holism”, in E. Lepore and B.C. Smith (eds.), The Oxford Handbook of Philosophy of Language , Oxford: Oxford University Press, 213–232.
  • Pandey, A.K. and K.M. Heilman, 2014, “Conduction Aphasia with Intact Visual Object Naming”, Cognitive and Behavioral Neurology , 27: 96–101.
  • Partee, B., 1973, “Some Structural Analogies between Tenses and Pronouns in English”, Journal of Philosophy, 70: 601–609.
  • –––, 1981, “Montague Grammar, Mental Representations, and Reality”, in S. Oehman and S. Kanger (eds.), Philosophy and Grammar , Dordrecht: Reidel, 59–78.
  • Patterson, K., P.J. Nestor, and T.T. Rogers, 2007, “Where Do You Know What You Know? The Representation of Semantic Knowledge in the Human Brain”, Nature Reviews Neuroscience , 8: 976–987.
  • Paul, H., 1920 [1880], Prinzipien der Sprachgeschichte, 5th edn., Halle: Niemeyer.
  • Pavão Martins, I. and L. Farrajota, 2007, “Proper and Common Names: A Double Dissociation”, Neuropsychologia, 45: 1744–1756.
  • Peeters, B. (ed.), 2006, Semantic Primes and Universal Grammar: Empirical Evidence from the Romance Languages , Amsterdam: Benjamins.
  • Perry, J., 1994, “Fodor and Lepore on Holism”, Philosophical Studies , 73: 123–138.
  • Pietroski, P., 2005, “Meaning before Truth”, in G. Preyer and G. Peter (eds.), Contextualism in Philosophy , Oxford: Oxford University Press, 255–302.
  • –––, 2010, “Concepts, Meanings and Truth: First Nature, Second Nature and Hard Work”, Mind & Language , 25: 247–278.
  • Plaut, D.C. and T. Shallice, 1993, “Deep Dyslexia: A Case Study of Connectionist Neuropsychology”, Cognitive Neuropsychology , 10: 377–500.
  • Pulman, S.G., 2005, “Lexical Decomposition: For and Against”, in J.I. Tait (ed.), Charting a New Course: Natural Language Processing and Information Retrieval , Dordrecht: Springer.
  • Pulvermüller, F., 2013, “Semantic Embodiment, Disembodiment or Misembodiment? In Search of Meaning in Modules and Neuron Circuits”, Brain and Language , 127: 86–103.
  • Pustejovsky, J., 1995, The Generative Lexicon , Cambridge, MA: MIT Press.
  • –––, 1998, “Generativity and Explanation in Semantics: A Reply to Fodor and Lepore”, Linguistic Inquiry , 29: 289–311.
  • –––, 2006, “Type Theory and Lexical Decomposition”, Journal of Cognitive Science , 7: 39–76.
  • Pustejovsky, J. and E. Jezek, 2008, “Semantic Coercion in Language: Beyond Distributional Analysis”, Rivista di Linguistica , 20: 175–208.
  • Pustejovsky, J. and A. Rumshisky, 2008, “Between Chaos and Structure: Interpreting Lexical Data Through a Theoretical Lens”, International Journal of Lexicography , 21: 337–355.
  • Pustejovsky, J., P. Bouillon, H. Isahara, K. Kanzaki, and C. Lee (eds.), 2012, Advances in Generative Lexicon Theory , Berlin: Springer.
  • Putnam, H., 1970, “Is Semantics Possible?”, in H. Kiefer and M.K. Munitz (eds.), Language, Belief, and Metaphysics , Albany, NY: SUNY Press, 50–63.
  • –––, 1973, “Meaning and Reference”, Journal of Philosophy, 70: 699–711.
  • –––, 1975, “The Meaning of ‘Meaning’”, in Mind, Language and Reality , Philosophical Papers Vol. 2 , Cambridge: Cambridge University Press.
  • Quine, W.V.O., 1943, “Notes on Existence and Necessity”, Journal of Philosophy , 40: 113–127.
  • –––, 1951, “Two Dogmas of Empiricism”, Philosophical Review , 60: 20–43.
  • –––, 1986, “Reply to Herbert G. Bohnert”, in L.E. Hahn and P.A. Schilpp (eds.), The Philosophy of W.V.O. Quine , La Salle, IL: Open Court, 93–95.
  • Rapp, B. (ed.), 2001, Handbook of Cognitive Neuropsychology , Philadelphia, PA: Psychology Press.
  • Rapp, B. and M. Goldrick, 2006, “Speaking Words: Contributions of Cognitive Neuropsychological Research”, Cognitive Neuropsychology , 23: 39–73.
  • Recanati, F., 1989, “The Pragmatics of What Is Said”, Mind and Language , 4: 295–329.
  • –––, 1993, Direct Reference , Oxford: Blackwell.
  • –––, 2004, Literal Meaning , Cambridge: Cambridge University Press.
  • Rey, G., 2005, “Mind, Intentionality and Inexistence: An Overview of My Work”, Croatian Journal of Philosophy , 5: 389–415.
  • Riemer, N., 2006, “Reductive Paraphrase and Meaning: A Critique of Wierzbickian Semantics”, Linguistics & Philosophy , 29: 347–379.
  • Rosch, E., 1975, “Cognitive Representation of Semantic Categories”, Journal of Experimental Psychology: General , 104: 192–233.
  • Rosch, E. and C.B. Mervis, 1975, “Family Resemblances: Studies in the Internal Structure of Categories”, Cognitive Psychology , 7: 573–605.
  • Russell, B., 1905, “On Denoting”, Mind , 14: 479–493.
  • Russell, G., 2008, Truth in Virtue of Meaning: A Defence of the Analytic/Synthetic Distinction , Oxford: Oxford University Press.
  • Saffran, E.M. and M.F. Schwartz, 1994, “Impairment of Sentence Comprehension”, Philosophical Transactions of the Royal Society of London, B: Biological Sciences , 346: 47–53.
  • Sainsbury, R.M. and M. Tye, 2012, Seven Puzzles of Thought and How to Solve Them: An Originalist Theory of Concepts , New York: Oxford University Press.
  • Sartori, G., M. Coltheart, M. Miozzo, and R. Job, 1994, “Category Specificity and Informational Specificity in Neuropsychological Impairment of Semantic Memory”, in C. Umiltà and M. Moscovitch (eds.), Attention and Performance , Cambridge, MA: MIT Press, 537–544.
  • Schwartz, S., 1978, “Putnam on Artifacts”, Philosophical Review , 87: 566–574.
  • –––, 1980, “Natural Kinds and Nominal Kinds”, Mind , 89: 182–195.
  • Searle, J., 1979, Expression and Meaning , Cambridge: Cambridge University Press.
  • –––, 1980, “The Background of Meaning”, in J. Searle, F. Kiefer and M. Bierwisch (eds.), Speech Act Theory and Pragmatics , Dordrecht: Reidel, 221–232.
  • Seidenberg, M.S. and J.L. McClelland, 1989, “A Distributed, Developmental Model of Word Recognition and Naming”, Psychological Review , 96: 523–568.
  • Segal, G., 2000, A Slim Book About Narrow Content, Cambridge, MA: MIT Press.
  • Semenza, C., 2006, “Retrieval Pathways for Common and Proper Names”, Cortex , 42: 884–891.
  • –––, 2009, “The Neuropsychology of Proper Names”, Mind & Language , 24: 347–369.
  • Semenza, C. and M.T. Sgaramella, 1993, “Production of Proper Names: A Clinical Case Study of the Effects of Phonemic Cueing”, Memory , 1: 265–280.
  • Shallice, T., 1988, From Neuropsychology to Mental Structure, Cambridge: Cambridge University Press.
  • Shallice, T. and R.P. Cooper, 2011, The Organization of Mind , Oxford: Oxford University Press.
  • Simmons, W.K., et al., 2007, “A Common Neural Substrate for Perceiving and Knowing about Color”, Neuropsychologia , 45: 2802–2810.
  • Sinclair, J.M., 2004, Trust the Text: Language, Corpus and Discourse , London: Routledge.
  • Smith, E.E., E.J. Shoben, and L.J. Rips, 1974, “Structure and Process in Semantic Memory: A Featural Model for Semantic Decisions”, Psychological Review , 81: 214–241.
  • Sperber, D. and D. Wilson, 1986, Relevance: Communication and Cognition , Oxford: Blackwell.
  • Spivey, M., K. McRae, and M. F. Joanisse (eds.), 2012, The Cambridge Handbook of Psycholinguistics , Cambridge: Cambridge University Press.
  • Stanley, J., 2007, Language in Context , Oxford: Oxford University Press.
  • Steels, L. and M. Hild (eds.), 2012, Language Grounding in Robots , New York, NY: Springer.
  • Stojanovic, I., 2008, “The Scope and the Subtleties of the Contextualism-Literalism-Relativism Debate”, Language and Linguistics Compass , 2: 1171–1188.
  • Stubbs, M., 2002, Words and Phrases: Corpus Studies of Lexical Semantics , Oxford: Blackwell.
  • Talmy, L., 2000a, Toward a Cognitive Semantics. Volume I: Concept Structuring Systems , Cambridge, MA: MIT Press.
  • –––, 2000b, Toward a Cognitive Semantics. Volume II: Typology and Process in Concept Structuring , Cambridge, MA: MIT Press.
  • Tarski, A., 1933, “Pojęcie prawdy w językach nauk dedukcyjnych” [The concept of truth in the languages of deductive sciences], Warsaw; English translation: “The Concept of Truth in Formalized Languages”, in A. Tarski, 1956, Logic, Semantics, Metamathematics, Oxford: Oxford University Press.
  • Taylor, J.R., 1994, “The Two-Level Approach to Meaning”, Linguistische Berichte , 149: 3–26.
  • –––, 1995, “Models of Word Meaning: the Network Model (Langacker) and the Two-Level Model (Bierwisch) in Comparison”, in R. Dirven and J. Vanparys (eds.), Current Approaches to the Lexicon , Frankfurt: Lang, 3–26.
  • –––, 1996, “On Running and Jogging”, Cognitive Linguistics , 7: 21–34.
  • –––, 2002, Cognitive Grammar , Oxford: Oxford University Press.
  • Tettamanti, M., et al., 2005, “Listening to Action-Related Sentences Activates Fronto-Parietal Motor Circuits”, Journal of Cognitive Neuroscience, 17: 273–281.
  • Thomason, R.H., 1974, “Introduction” to R. Montague, Formal Philosophy: Selected Papers of Richard Montague , New Haven, CT and London: Yale University Press.
  • Thomasson, A., 2007, “Artifacts and Human Concepts”, in E. Margolis and S. Laurence (eds.), Creations of the Mind , Oxford: Oxford University Press, 52–73.
  • Tomaszewski Farias, S., G. Harrington, C. Broomand, and M. Seyal, 2005, “Differences in Functional MR Imaging Activation Patterns Associated with Confrontation Naming and Responsive Naming”, American Journal of Neuroradiology, 26: 2492–2499.
  • Toye, R., 2013, Rhetoric: A Very Short Introduction , Oxford: Oxford University Press.
  • Travis, C., 1975, Saying and Understanding , Oxford: Blackwell.
  • Traugott, E. and R.B. Dasher, 2001, Regularity in Semantic Change , Cambridge: Cambridge University Press.
  • Traxler, M. and M.A. Gernsbacher, 2006, Handbook of Psycholinguistics, 2nd edn., New York, NY: Academic Press.
  • Trier, J., 1931, Der Deutsche Wortschatz im Sinnbezirk des Verstandes: Die Geschichte eines sprachlichen Feldes I. Von den Anfängen bis zum Beginn des 13. Jhdts., Heidelberg: Winter.
  • Tulving, E., 1972, “Episodic and Semantic Memory”, in E. Tulving and W. Donaldson (eds.), Organization of Memory , New York, NY: Academic Press, 381–403.
  • Vigneau, M., V. Beaucousin, P.Y. Hervé, H. Duffau, F. Crivello, O. Houdé, B. Mazoyer, and N. Tzourio-Mazoyer, 2006, “Meta-Analyzing Left Hemisphere Language Areas: Phonology, Semantics, and Sentence Processing”, NeuroImage , 30: 1414–1432.
  • Von Fintel, K. and L. Matthewson, 2008, “Universals in Semantics”, The Linguistic Review , 25: 139–201.
  • Waldron, E.J., K. Manzel, and D. Tranel, 2014, “The Left Temporal Pole is a Heteromodal Hub for Retrieving Proper Names”, Frontiers in Bioscience , 6: 50–57.
  • Warrington, E.K., 1975, “The Selective Impairment of Semantic Memory”, Quarterly Journal of Experimental Psychology , 27: 635–657.
  • –––, 1985, “Agnosia: The Impairment of Object Recognition”, in J.A.M. Frederiks (ed.), Clinical Neuropsychology , Amsterdam: Elsevier, 333–349.
  • Warrington, E.K. and R.A. McCarthy, 1983, “Category Specific Access Dysphasia”, Brain, 106: 859–878.
  • Warrington, E.K. and T. Shallice, 1984, “Category Specific Semantic Impairments”, Brain , 107: 829–854.
  • Weiskopf, D.A., 2010, “Embodied Cognition and Linguistic Comprehension”, Studies in the History and Philosophy of Science , 41: 294–304.
  • Wierzbicka, A., 1972, Semantic Primitives , Frankfurt: Athenäum.
  • –––, 1996, Semantics: Primes and Universals , Oxford: Oxford University Press.
  • Williamson, T., 2007, The Philosophy of Philosophy , Oxford: Blackwell.
  • Wittgenstein, L., 1922, Tractatus Logico-Philosophicus , London: Routledge & Kegan Paul.
  • Wolff, P., 2003, “Direct Causation in the Linguistic Coding and Individuation of Causal Events”, Cognition , 88: 1–48.
  • Wunderlich, D., 1991, “How Do Prepositional Phrases Fit Into Compositional Syntax and Semantics?”, Linguistics , 29: 591–621.
  • –––, 1993, “On German Um : Semantic and Conceptual Aspects”, Linguistics , 31: 111–133.
Academic Tools

How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database.

Other Internet Resources
  • Google Ngram Viewer
  • Philpapers: Philosophy of Language
  • Philpapers: Lexical Semantics
  • Semantics Archive

Related Entries

ambiguity | analytic/synthetic distinction | artifact | assertion | belief | cognition: embodied | compositionality | concepts | convention | descriptions | externalism about the mind | implicature | indexicals | logical form | meaning, theories of | meaning: normativity of | mental content: narrow | names | natural kinds | pragmatics | presupposition | propositional attitude reports | propositions | quantifiers and quantification | reference | rigid designators | semantics: two-dimensional | speech acts | vagueness

Copyright © 2019 by Luca Gasparri <luca.gasparri@cnrs.fr> and Diego Marconi <diego.marconi@unito.it>


  • Daily Crossword
  • Word Puzzle
  • Word Finder
  • Word of the Day
  • Synonym of the Day
  • Word of the Year
  • Language stories
  • All featured
  • Gender and sexuality
  • All pop culture
  • Grammar Coach ™
  • Writing hub
  • Grammar essentials
  • Commonly confused
  • All writing tips
  • Pop culture
  • Writing tips

the faculty or power of speaking ; oral communication; ability to express one's thoughts and emotions by speech sounds and gesture: Losing her speech made her feel isolated from humanity.

the act of speaking: He expresses himself better in speech than in writing.

something that is spoken ; an utterance, remark, or declaration: We waited for some speech that would indicate her true feelings.

a form of communication in spoken language, made by a speaker before an audience for a given purpose: a fiery speech.

any single utterance of an actor in the course of a play, motion picture, etc.

the form of utterance characteristic of a particular people or region; a language or dialect.

manner of speaking, as of a person: Your slovenly speech is holding back your career.

a field of study devoted to the theory and practice of oral communication.

Archaic . rumor .

Origin of speech

Synonym study for speech, other words for speech, other words from speech.

  • self-speech, noun

Words Nearby speech

  • speculum metal
  • speech center
  • speech clinic
  • speech community
  • speech correction

Dictionary.com Unabridged Based on the Random House Unabridged Dictionary, © Random House, Inc. 2024

How to use speech in a sentence

Kids are interacting with Alexas that can record their voice data and influence their speech and social development.

The attorney general delivered a controversial speech Wednesday.

For example, my company, Teknicks, is working with an online K-12 speech and occupational therapy provider.

Instead, it would give tech companies a powerful incentive to limit Brazilians’ freedom of speech at a time of political unrest.

However, the president did give a speech in Suresnes, France, the next day during a ceremony hosted by the American Battle Monuments Commission.

Those are troubling numbers, for unfettered speech is not incidental to a flourishing society.

There is no such thing as speech so hateful or offensive it somehow “justifies” or “legitimizes” the use of violence.

We need to recover and grow the idea that the proper answer to bad speech is more and better speech .

Tend to your own garden, to quote the great sage of free speech , Voltaire, and invite people to follow your example.

The simple, awful truth is that free speech has never been particularly popular in America.

Alessandro turned a grateful look on Ramona as he translated this speech , so in unison with Indian modes of thought and feeling.

And so this is why the clever performer cannot reproduce the effect of a speech of Demosthenes or Daniel Webster.

He said no more in words, but his little blue eyes had an eloquence that left nothing to mere speech .

After pondering over Mr. Blackbird's speech for a few moments he raised his head.

Albinia, I have refrained from speech as long as possible; but this is really too much!

British Dictionary definitions for speech

/ ( spiːtʃ ) /

the act or faculty of speaking, esp as possessed by persons : to have speech with somebody

( as modifier ) : speech therapy

that which is spoken; utterance

a talk or address delivered to an audience

a person's characteristic manner of speaking

a national or regional language or dialect

linguistics another word for parole (def. 5)

Collins English Dictionary - Complete & Unabridged 2012 Digital Edition © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009, 2012

  • More from M-W
  • To save this word, you'll need to log in. Log In

Definition of word

 (Entry 1 of 2)

Definition of word  (Entry 2 of 2)

transitive verb

intransitive verb

Did you know?

What makes a word a real word?

The word word has a wide range of meanings and uses in English. Yet one of the most often looked for pieces of information regarding word is not something that would be found in its definition. Instead, it is some variant of the question, What makes a word a real word?

One of the most prolific areas of change and variation in English is vocabulary; new words are constantly being coined to name or describe new inventions or innovations, or to better identify aspects of our rapidly changing world. Constraints of time, money, and staff would make it impossible for any dictionary, no matter how large, to capture a fully comprehensive account of all the words in the language. And even if such a leviathan reference was somehow fashioned, the dictionary would be obsolete the instant it was published as speakers and writers continued generating new terms to meet their constantly changing needs.

Most general English dictionaries are designed to include only those words that meet certain criteria of usage across wide areas and over extended periods of time (for more details about how words are chosen for dictionary entry, read "How does a word get into a Merriam-Webster dictionary?" in our FAQ). As a result, they may omit words that are still in the process of becoming established, those that are too highly specialized, or those that are so informal that they are rarely documented in professionally edited writing. But the words left out are as real as those that gain entry; the former simply haven't met the criteria for dictionary entry–at least not yet (newer ones may ultimately gain admission to the dictionary's pages if they gain sufficient use).

However, in preparing your own writings, it is worth remembering that the dictionary encompasses the most widely used terms in English. Words that are left out may have usage limited to specific, isolated, or informal contexts, so they should be used carefully.

Examples of word in a Sentence

These examples are programmatically compiled from various online sources to illustrate current usage of the word 'word.' Any opinions expressed in the examples do not represent those of Merriam-Webster or its editors. Send us feedback about these examples.

Word History

Middle English, from Old English; akin to Old High German wort word, Latin verbum , Greek eirein to say, speak, Hittite weriya- to call, name

before the 12th century, in the meaning defined at sense 2b

13th century, in the meaning defined at intransitive sense

Phrases Containing word

  • a word to the wise
  • a brief word
  • a word of warning
  • a word in someone's ear
  • by word of mouth
  • function word
  • content word
  • have the last word
  • hang on someone's every word
  • have a word in your ear
  • have a word with (someone)
  • give (someone) one's word
  • go back on one's word
  • keep one's word
  • in the strict / strictest sense (of the word)
  • in every sense of the word
  • mum's the word
  • not say / breathe a word
  • of one's word
  • someone's word is law
  • say the word
  • someone's word is his / her bond
  • put in a good word
  • put / get the word out
  • printed page / word
  • true to one's word
  • take someone's word for it
  • word - association test
  • what's the good word
  • weasel word
  • word - mongering
  • word - hoard
  • word - perfect
  • word on the street
  • word of mouth
  • word of honor
  • word processor
  • word problem
  • word processing
  • word / rumor has it
  • four - letter word
  • man / woman of his / her word
  • fighting word
  • not speak a word of
  • the last / final word
  • my word is my bond
  • from the word go
  • word square
  • word for word
  • spread the word
  • not utter a word
  • not speak a word
  • word stress
  • know the meaning of the word
  • the n - word
  • portmanteau word
  • upon my word

Articles Related to word

brutalize

'Handsome,' 'Geek,' and 8 More Words...

'Handsome,' 'Geek,' and 8 More Words That Changed Their Meanings

Language evolves

Dictionary Entries Near word

Worcestershire sauce

Cite this Entry

“Word.” Merriam-Webster.com Dictionary , Merriam-Webster, https://www.merriam-webster.com/dictionary/word. Accessed 27 Mar. 2024.

Kids Definition

Kids definition of word.

Kids Definition of word  (Entry 2 of 2)

More from Merriam-Webster on word

Nglish: Translation of word for Spanish Speakers

Britannica English: Translation of word for Arabic Speakers

Britannica.com: Encyclopedia article about word

Subscribe to America's largest dictionary and get thousands more definitions and advanced search—ad free!

Play Quordle: Guess all four words in a limited number of tries.  Each of your guesses must be a real 5-letter word.

Can you solve 4 words at once?

Word of the day.

See Definitions and Examples »

Get Word of the Day daily email!

Popular in Grammar & Usage

8 grammar terms you used to know, but forgot, homophones, homographs, and homonyms, commonly misspelled words, how to use em dashes (—), en dashes (–) , and hyphens (-), absent letters that are heard anyway, popular in wordplay, the words of the week - mar. 22, 12 words for signs of spring, 9 superb owl words, 'gaslighting,' 'woke,' 'democracy,' and other top lookups, fan favorites: your most liked words of the day 2023, games & quizzes.

Play Blossom: Solve today's spelling word game by finding as many words as you can using just 7 letters. Longer words score more points.

  • Dictionaries home
  • American English
  • Collocations
  • German-English
  • Grammar home
  • Practical English Usage
  • Learn & Practise Grammar (Beta)
  • Word Lists home
  • My Word Lists
  • Recent additions
  • Resources home
  • Text Checker

Definition of speech noun from the Oxford Advanced American Dictionary

Definitions on the go

Look up any word in the dictionary offline, anytime, anywhere with the Oxford Advanced Learner’s Dictionary app.

speech of word meaning

A part of speech (also called a word class ) is a category that describes the role a word plays in a sentence. Understanding the different parts of speech can help you analyze how words function in a sentence and improve your writing.

The parts of speech are classified differently in different grammars, but most traditional grammars list eight parts of speech in English: nouns , pronouns , verbs , adjectives , adverbs , prepositions , conjunctions , and interjections . Some modern grammars add others, such as determiners and articles .

Many words can function as different parts of speech depending on how they are used. For example, “laugh” can be a noun (e.g., “I like your laugh”) or a verb (e.g., “don’t laugh”).

Table of contents

  • Prepositions
  • Conjunctions
  • Interjections

Other parts of speech

Interesting language articles, frequently asked questions.

A noun is a word that refers to a person, concept, place, or thing. Nouns can act as the subject of a sentence (i.e., the person or thing performing the action) or as the object of a verb (i.e., the person or thing affected by the action).

There are numerous types of nouns, including common nouns (used to refer to nonspecific people, concepts, places, or things), proper nouns (used to refer to specific people, concepts, places, or things), and collective nouns (used to refer to a group of people or things).

Ella lives in France .

Other types of nouns include countable and uncountable nouns , concrete nouns , abstract nouns , and gerunds .

Check for common mistakes

Use the best grammar checker available to check for common mistakes in your text.

Fix mistakes for free

A pronoun is a word used in place of a noun. Pronouns typically refer back to an antecedent (a previously mentioned noun) and must demonstrate correct pronoun-antecedent agreement . Like nouns, pronouns can refer to people, places, concepts, and things.

There are numerous types of pronouns, including personal pronouns (used in place of the proper name of a person), demonstrative pronouns (used to refer to specific things and indicate their relative position), and interrogative pronouns (used to introduce questions about things, people, and ownership).

That is a horrible painting!

A verb is a word that describes an action (e.g., “jump”), occurrence (e.g., “become”), or state of being (e.g., “exist”). Verbs indicate what the subject of a sentence is doing. Every complete sentence must contain at least one verb.

Verbs can change form depending on subject (e.g., first person singular), tense (e.g., simple past), mood (e.g., interrogative), and voice (e.g., passive voice ).

Regular verbs are verbs whose simple past and past participle are formed by adding“-ed” to the end of the word (or “-d” if the word already ends in “e”). Irregular verbs are verbs whose simple past and past participles are formed in some other way.

“I’ve already checked twice.”

“I heard that you used to sing .”

Other types of verbs include auxiliary verbs , linking verbs , modal verbs , and phrasal verbs .

An adjective is a word that describes a noun or pronoun. Adjectives can be attributive , appearing before a noun (e.g., “a red hat”), or predicative , appearing after a noun with the use of a linking verb like “to be” (e.g., “the hat is red ”).

Adjectives can also have a comparative function. Comparative adjectives compare two or more things. Superlative adjectives describe something as having the most or least of a specific characteristic.

Other types of adjectives include coordinate adjectives , participial adjectives , and denominal adjectives .

An adverb is a word that can modify a verb, adjective, adverb, or sentence. Adverbs are often formed by adding “-ly” to the end of an adjective (e.g., “slow” becomes “slowly”), although not all adverbs have this ending, and not all words with this ending are adverbs.

There are numerous types of adverbs, including adverbs of manner (used to describe how something occurs), adverbs of degree (used to indicate extent or degree), and adverbs of place (used to describe the location of an action or event).

Talia writes quite quickly.

Other types of adverbs include adverbs of frequency , adverbs of purpose , focusing adverbs , and adverbial phrases .

A preposition is a word (e.g., “at”) or phrase (e.g., “on top of”) used to show the relationship between the different parts of a sentence. Prepositions can be used to indicate aspects such as time , place , and direction .

I left the cup on the kitchen counter.

A conjunction is a word used to connect different parts of a sentence (e.g., words, phrases, or clauses).

The main types of conjunctions are coordinating conjunctions (used to connect items that are grammatically equal), subordinating conjunctions (used to introduce a dependent clause), and correlative conjunctions (used in pairs to join grammatically equal parts of a sentence).

You can choose what movie we watch because I chose the last time.

An interjection is a word or phrase used to express a feeling, give a command, or greet someone. Interjections are a grammatically independent part of speech, so they can often be excluded from a sentence without affecting the meaning.

Types of interjections include volitive interjections (used to make a demand or request), emotive interjections (used to express a feeling or reaction), cognitive interjections (used to indicate thoughts), and greetings and parting words (used at the beginning and end of a conversation).

Ouch ! I hurt my arm.

I’m, um , not sure.

The traditional classification of English words into eight parts of speech is by no means the only one or the objective truth. Grammarians have often divided them into more or fewer classes. Other commonly mentioned parts of speech include determiners and articles.

  • Determiners

A determiner is a word that describes a noun by indicating quantity, possession, or relative position.

Common types of determiners include demonstrative determiners (used to indicate the relative position of a noun), possessive determiners (used to describe ownership), and quantifiers (used to indicate the quantity of a noun).

My brother is selling his old car.

Other types of determiners include distributive determiners , determiners of difference , and numbers .

An article is a word that modifies a noun by indicating whether it is specific or general.

  • The definite article the is used to refer to a specific version of a noun. The can be used with all countable and uncountable nouns (e.g., “the door,” “the energy,” “the mountains”).
  • The indefinite articles a and an refer to general or unspecific nouns. The indefinite articles can only be used with singular countable nouns (e.g., “a poster,” “an engine”).

There’s a concert this weekend.

If you want to know more about nouns , pronouns , verbs , and other parts of speech, make sure to check out some of our language articles with explanations and examples.

Nouns & pronouns

  • Common nouns
  • Proper nouns
  • Collective nouns
  • Personal pronouns
  • Uncountable and countable nouns
  • Verb tenses
  • Phrasal verbs
  • Types of verbs
  • Active vs passive voice
  • Subject-verb agreement

A is an indefinite article (along with an ). While articles can be classed as their own part of speech, they’re also considered a type of determiner .

The indefinite articles are used to introduce nonspecific countable nouns (e.g., “a dog,” “an island”).

In is primarily classed as a preposition, but it can be classed as various other parts of speech, depending on how it is used:

  • Preposition (e.g., “ in the field”)
  • Noun (e.g., “I have an in with that company”)
  • Adjective (e.g., “Tim is part of the in crowd”)
  • Adverb (e.g., “Will you be in this evening?”)

As a part of speech, and is classed as a conjunction . Specifically, it’s a coordinating conjunction .

And can be used to connect grammatically equal parts of a sentence, such as two nouns (e.g., “a cup and plate”), or two adjectives (e.g., “strong and smart”). And can also be used to connect phrases and clauses.

Is this article helpful?

Other students also liked, what is a collective noun | examples & definition.

  • What Is an Adjective? | Definition, Types & Examples
  • Using Conjunctions | Definition, Rules & Examples

More interesting articles

  • Definite and Indefinite Articles | When to Use "The", "A" or "An"
  • Ending a Sentence with a Preposition | Examples & Tips
  • What Are Prepositions? | List, Examples & How to Use
  • What Is a Determiner? | Definition, Types & Examples
  • What Is an Adverb? Definition, Types & Examples
  • What Is an Interjection? | Examples, Definition & Types

Unlimited Academic AI-Proofreading

✔ Document error-free in 5minutes ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

PrepScholar

Choose Your Test

Sat / act prep online guides and tips, understanding the 8 parts of speech: definitions and examples.

author image

General Education

feature-parts-of-speech-sentence-map

If you’re trying to learn the grammatical rules of English, you’ve probably been asked to learn the parts of speech. But what are parts of speech and how many are there? How do you know which words are classified in each part of speech? 

The answers to these questions can be a bit complicated—English is a difficult language to learn and understand. Don’t fret, though! We’re going to answer each of these questions for you with a full guide to the parts of speech that explains the following: 

  • What the parts of speech are, including a comprehensive parts of speech list
  • Parts of speech definitions for the individual parts of speech. (If you’re looking for information on a specific part of speech, you can search for it by pressing Command + F, then typing in the part of speech you’re interested in.) 
  • Parts of speech examples
  • A ten question quiz covering parts of speech definitions and parts of speech examples

We’ve got a lot to cover, so let’s begin!

Feature Image: (Gavina S / Wikimedia Commons)

body-woman-question-marks

What Are Parts of Speech? 

The parts of speech definitions in English can vary, but here’s a widely accepted one: a part of speech is a category of words that serve a similar grammatical purpose in sentences.  

To make that definition even simpler, a part of speech is just a category for similar types of words . All of the types of words included under a single part of speech function in similar ways when they’re used properly in sentences.

In the English language, it's commonly accepted that there are 8 parts of speech: nouns, verbs, adjectives, adverbs, pronouns, conjunctions, interjections, and prepositions. Each of these categories plays a different role in communicating meaning. Each of the eight parts of speech—which we might also call the "main classes" of speech—also has subclasses. In other words, we can think of each of the eight parts of speech as a general category containing different types of words. There are different types of nouns, different types of verbs, different types of adjectives, adverbs, pronouns...you get the idea.

And that’s an overview of what a part of speech is! Next, we’ll explain each of the 8 parts of speech—definitions and examples included for each category. 
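As an illustrative aside for readers who like a bit of code: classifying words by part of speech is exactly what the part-of-speech taggers in natural language processing toolkits do. The minimal Python sketch below is only an illustration under assumptions, not something from this guide; it assumes the NLTK library (and its tokenizer and tagger models) is installed, and it reuses the guide's own example sentence "The young girl rode her red bike."

    # Illustrative sketch only: assumes Python with the NLTK library installed.
    import nltk

    # Fetch the tokenizer and tagger models if they aren't already present.
    # (On newer NLTK releases the tagger resource may be named "averaged_perceptron_tagger_eng".)
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    sentence = "The young girl rode her red bike."
    tokens = nltk.word_tokenize(sentence)   # split the sentence into words and punctuation
    tagged = nltk.pos_tag(tokens)           # label each token with a part-of-speech tag
    print(tagged)
    # Typical output:
    # [('The', 'DT'), ('young', 'JJ'), ('girl', 'NN'), ('rode', 'VBD'),
    #  ('her', 'PRP$'), ('red', 'JJ'), ('bike', 'NN'), ('.', '.')]

The tags come from the Penn Treebank tag set, which is finer-grained than the eight classes above, but each tag maps onto one of them: DT is a determiner (treated as an adjective subclass in this guide), JJ an adjective, NN a noun, VBD a past-tense verb, and PRP$ a possessive pronoun.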

#1: Nouns

Nouns are a class of words that refer, generally, to people and living creatures, objects, events, ideas, states of being, places, and actions. You've probably heard English nouns referred to as "persons, places, or things." That definition is a little simplistic, though—while nouns do include people, places, and things, "things" is kind of a vague term. It's important to recognize that "things" can include physical things—like objects or belongings—and nonphysical, abstract things—like ideas, states of existence, and actions.

Since there are many different types of nouns, we’ll include several examples of nouns used in a sentence while we break down the subclasses of nouns next!

Subclasses of Nouns, Including Examples

As an open class of words, the category of “nouns” has a lot of subclasses. The most common and important subclasses of nouns are common nouns, proper nouns, concrete nouns, abstract nouns, collective nouns, and count and mass nouns. Let’s break down each of these subclasses!

Common Nouns and Proper Nouns

Common nouns are generic nouns—they don’t name specific items. They refer to people (the man, the woman), living creatures (cat, bird), objects (pen, computer, car), events (party, work), ideas (culture, freedom), states of being (beauty, integrity), and places (home, neighborhood, country) in a general way. 

Proper nouns are sort of the counterpart to common nouns. Proper nouns refer to specific people, places, events, or ideas. Names are the most obvious example of proper nouns, like in these two examples: 

Common noun: What state are you from?

Proper noun: I’m from Arizona .

Whereas “state” is a common noun, Arizona is a proper noun since it refers to a specific state. Whereas “the election” is a common noun, “Election Day” is a proper noun. Another way to pick out proper nouns: the first letter is often capitalized. If you’d capitalize the word in a sentence, it’s almost always a proper noun. 

Concrete Nouns and Abstract Nouns

Concrete nouns are nouns that can be identified through the five senses. Concrete nouns include people, living creatures, objects, and places, since these things can be sensed in the physical world. In contrast to concrete nouns, abstract nouns are nouns that identify ideas, qualities, concepts, experiences, or states of being. Abstract nouns cannot be detected by the five senses. Here’s an example of concrete and abstract nouns used in a sentence: 

Concrete noun: Could you please fix the weedeater and mow the lawn ?

Abstract noun: Aliyah was delighted to have the freedom to enjoy the art show in peace .

See the difference? A weedeater and the lawn are physical objects or things, and freedom and peace are not physical objects, though they’re “things” people experience! Despite those differences, they all count as nouns. 

Collective Nouns, Count Nouns, and Mass Nouns

Nouns are often categorized based on number and amount. Collective nouns are nouns that refer to a group of something—often groups of people or a type of animal. Team , crowd , and herd are all examples of collective nouns. 

Count nouns are nouns that can appear in the singular or plural form, can be modified by numbers, and can be described by quantifying determiners (e.g. many, most, more, several). For example, “bug” is a count noun. It can occur in singular form if you say, “There is a bug in the kitchen,” but it can also occur in the plural form if you say, “There are many bugs in the kitchen.” (In the case of the latter, you’d call an exterminator...which is an example of a common noun!) Any noun that can accurately occur in one of these singular or plural forms is a count noun. 

Mass nouns are another type of noun that involve numbers and amount. Mass nouns are nouns that usually can’t be pluralized, counted, or quantified and still make sense grammatically. “Charisma” is an example of a mass noun (and an abstract noun!). For example, you could say, “They’ve got charisma, ” which doesn’t imply a specific amount. You couldn’t say, “They’ve got six charismas, ” or, “They’ve got several charismas .” It just doesn’t make sense! 

#2: Verbs

A verb is a part of speech that, when used in a sentence, communicates an action, an occurrence, or a state of being . In sentences, verbs are the most important part of the predicate, which explains or describes what the subject of the sentence is doing or how they are being. And, guess what? All sentences contain verbs!

There are many words in the English language that are classified as verbs. A few common verbs include the words run, sing, cook, talk, and clean. These words are all verbs because they communicate an action performed by a living being. We’ll look at more specific examples of verbs as we discuss the subclasses of verbs next!

Subclasses of Verbs, Including Examples

Like nouns, verbs have several subclasses. The subclasses of verbs include copular or linking verbs, intransitive verbs, transitive verbs, and ditransitive or double transitive verbs. Let’s dive into these subclasses of verbs!

Copular or Linking Verbs

Copular verbs, or linking verbs, are verbs that link a subject with its complement in a sentence. The most familiar linking verb is probably be. Other common copular verbs in English include act, become, feel, grow, seem, smell, and taste.

So how do copular verbs work? Well, in a sentence, if we said, “Michi is ,” and left it at that, it wouldn’t make any sense. “Michi,” the subject, needs to be connected to a complement by the copular verb “is.” Instead, we could say, “Michi is leaving.” In that instance, is links the subject of the sentence to its complement. 

Transitive Verbs, Intransitive Verbs, and Ditransitive Verbs

Transitive verbs are verbs that affect or act upon an object. When unattached to an object in a sentence, a transitive verb does not make sense. Here’s an example of a transitive verb attached to (and appearing before) an object in a sentence: 

Please take the clothes to the dry cleaners.

In this example, “take” is a transitive verb because it requires an object—”the clothes”—to make sense. “The clothes” are the objects being taken. “Please take” wouldn’t make sense by itself, would it? That’s because the transitive verb “take,” like all transitive verbs, transfers its action onto another being or object. 

Conversely, intransitive verbs don’t require an object to act upon in order to make sense in a sentence. These verbs make sense all on their own! For instance, “They ran ,” “We arrived ,” and, “The car stopped ” are all examples of sentences that contain intransitive verbs. 

Finally, ditransitive verbs, or double transitive verbs, are a bit more complicated. Ditransitive verbs are verbs that are followed by two objects in a sentence . One of the objects has the action of the ditransitive verb done to it, and the other object has the action of the ditransitive verb directed towards it. Here’s an example of what that means in a sentence: 

I cooked Nathan a meal.

In this example, "cooked" is a ditransitive verb because it takes two objects: Nathan and meal. The meal has the action of "cooked" done to it, and "Nathan" has the action of the verb directed towards him.


#3: Adjectives

Here's the simplest definition of adjectives: adjectives are words that describe other words. Specifically, adjectives modify nouns and noun phrases. In sentences, adjectives most often appear directly before the nouns and pronouns they describe, though they can also follow a linking verb (as in "the museum was boring").

Adjectives give more detail to nouns and pronouns by describing how a noun looks, smells, tastes, sounds, or feels, or its state of being or existence. For example, you could say, "The girl rode her bike." That sentence doesn't have any adjectives in it, but you could add an adjective before both of the nouns in the sentence—"girl" and "bike"—to give more detail to the sentence. It might read like this: "The young girl rode her red bike." You can pick out adjectives in a sentence by asking the following questions:

  • Which one? 
  • What kind? 
  • How many? 
  • Whose?

We’ll look at more examples of adjectives as we explore the subclasses of adjectives next!

Subclasses of Adjectives, Including Examples

Subclasses of adjectives include adjective phrases, comparative adjectives, superlative adjectives, and determiners (which include articles, possessive adjectives, and demonstratives). 

Adjective Phrases

An adjective phrase is a group of words that describe a noun or noun phrase in a sentence. Adjective phrases can appear before the noun or noun phrase in a sentence, like in this example: 

The extremely fragile vase somehow did not break during the move.

In this case, extremely fragile describes the vase. On the other hand, adjective phrases can appear after the noun or noun phrase in a sentence as well: 

The museum was somewhat boring. 

Again, the phrase somewhat boring describes the museum. The takeaway is this: adjective phrases describe a noun or noun phrase with greater detail than a single adjective.

Comparative Adjectives and Superlative Adjectives

Comparative adjectives are used in sentences where two nouns are compared. They function to compare the differences between the two nouns that they modify. In sentences, comparative adjectives often appear in this pattern and typically end with -er. If we were to describe how comparative adjectives function as a formula, it might look something like this: 

Noun (subject) + verb + comparative adjective + than + noun (object).

Here’s an example of how a comparative adjective would work in that type of sentence: 

The horse was faster than the dog.

The adjective faster compares the speed of the horse to the speed of the dog. Other common comparative adjectives include words that compare distance ( higher, lower, farther ), age ( younger, older ), size and dimensions ( bigger, smaller, wider, taller, shorter ), and quality or feeling ( better, cleaner, happier, angrier ). 

Superlative adjectives are adjectives that describe the extremes of a quality that applies to a subject being compared to a group of objects . Put more simply, superlative adjectives help show how extreme something is. In sentences, superlative adjectives usually appear in this structure and end in -est : 

Noun (subject) + verb + the + superlative adjective + noun (object).

Here’s an example of a superlative adjective that appears in that type of sentence: 

Their story was the funniest story. 

In this example, the subject— story —is being compared to a group of objects—other stories. The superlative adjective “funniest” implies that this particular story is the funniest out of all the stories ever, period. Other common superlative adjectives are best, worst, craziest, and happiest... though there are many more than that! 

It’s also important to know that you can often omit the object from the end of the sentence when using superlative adjectives, like this: “Their story was the funniest.” We still know that “their story” is being compared to other stories without the object at the end of the sentence.

Determiners

The last subclass of adjectives we want to look at are determiners. Determiners are words that determine what kind of reference a noun or noun phrase makes. These words are placed in front of nouns to make it clear what the noun is referring to. Determiners are an example of a part of speech subclass that contains a lot of subclasses of its own. Here is a list of the different types of determiners: 

  • Definite article: the
  • Indefinite articles : a, an 
  • Demonstratives: this, that, these, those
  • Pronouns and possessive determiners: my, your, his, her, its, our, their
  • Quantifiers : a little, a few, many, much, most, some, any, enough
  • Numbers: one, twenty, fifty
  • Distributives: all, both, half, either, neither, each, every
  • Difference words : other, another
  • Pre-determiners: such, what, rather, quite

Here are some examples of how determiners can be used in sentences: 

Definite article: Get in the car.  

Demonstrative: Could you hand me that magazine?  

Possessive determiner: Please put away your clothes. 

Distributive: He ate all of the pie. 

Though some of the words above might not seem descriptive, they actually do describe the specificity and definiteness, relationship, and quantity or amount of a noun or noun phrase. For example, the definite article “the” (a type of determiner) indicates that a noun refers to a specific thing or entity. The indefinite article “an,” on the other hand, indicates that a noun refers to a nonspecific entity. 

One quick note, since English is always more complicated than it seems: while articles are most commonly classified as adjectives, they can also function as adverbs in specific situations, too. Not only that, some people are taught that determiners are their own part of speech...which means that some people are taught there are 9 parts of speech instead of 8! 

It can be a little confusing, which is why we have a whole article explaining how articles function as a part of speech to help clear things up . 

#4: Adverbs

Adverbs are words that modify verbs, adjectives (including determiners), clauses, prepositions, and sentences. Adverbs typically answer the questions how?, in what way?, when?, where?, and to what extent? In answering these questions, adverbs function to express frequency, degree, manner, time, place, and level of certainty . Adverbs can answer these questions in the form of single words, or in the form of adverbial phrases or adverbial clauses. 

Adverbs are commonly known for being words that end in -ly, but there’s actually a bit more to adverbs than that, which we’ll dive into while we look at the subclasses of adverbs!

Subclasses of Adverbs, Including Examples

There are many types of adverbs, but the main subclasses we’ll look at are conjunctive adverbs, and adverbs of place, time, manner, degree, and frequency. 

Conjunctive Adverbs

Conjunctive adverbs look like coordinating conjunctions (which we’ll talk about later!), but they are actually their own category: conjunctive adverbs are words that connect independent clauses into a single sentence . These adverbs appear after a semicolon and before a comma in sentences, like in these two examples: 

She was exhausted; nevertheless , she went for a five mile run. 

They didn’t call; instead , they texted.  

Though conjunctive adverbs are frequently used to join two independent clauses into a single sentence with a semicolon and comma, they can also appear at the beginning of sentences, like this:

He chopped the vegetables. Meanwhile, I boiled the pasta.  

One thing to keep in mind is that conjunctive adverbs come with a comma. When you use them, be sure to include a comma afterward! 

There are a lot of conjunctive adverbs, but some common ones include also, anyway, besides, finally, further, however, indeed, instead, meanwhile, nevertheless, next, nonetheless, now, otherwise, similarly, then, therefore, and thus.  

Adverbs of Place, Time, Manner, Degree, and Frequency

There are also adverbs of place, time, manner, degree, and frequency. Each of these types of adverbs express a different kind of meaning. 

Adverbs of place express where an action is done or where an event occurs. These are used after the verb, direct object, or at the end of a sentence. A sentence like “She walked outside to watch the sunset” uses outside as an adverb of place. 

Adverbs of time explain when something happens. These adverbs are used at the beginning or at the end of sentences. In a sentence like “The game should be over soon,” soon functions as an adverb of time. 

Adverbs of manner describe the way in which something is done or how something happens. These are the adverbs that usually end in the familiar -ly.  If we were to write “She quickly finished her homework,” quickly is an adverb of manner. 

Adverbs of degree tell us the extent to which something happens or occurs. If we were to say “The play was quite interesting,” quite tells us the extent of how interesting the play was. Thus, quite is an adverb of degree.  

Finally, adverbs of frequency express how often something happens . In a sentence like “They never know what to do with themselves,” never is an adverb of frequency. 

Five subclasses of adverbs is a lot, so here's a quick recap of the example adverbs we covered for each category:

  • Place: outside
  • Time: soon
  • Manner: quickly
  • Degree: quite
  • Frequency: never

It’s important to know about these subclasses of adverbs because many of them don’t follow the old adage that adverbs end in -ly. 


#5: Pronouns

Pronouns are words that can be substituted for a noun or noun phrase in a sentence . Pronouns function to make sentences less clunky by allowing people to avoid repeating nouns over and over. For example, if you were telling someone a story about your friend Destiny, you wouldn’t keep repeating their name over and over again every time you referred to them. Instead, you’d use a pronoun—like they or them—to refer to Destiny throughout the story. 

Pronouns are typically short words, often only two or three letters long. The most familiar pronouns in the English language are they, she, and he. But these aren’t the only pronouns. There are many more pronouns in English that fall under different subclasses!

Subclasses of Pronouns, Including Examples

There are many subclasses of pronouns, but the most commonly used subclasses are personal pronouns, possessive pronouns, demonstrative pronouns, indefinite pronouns, and interrogative pronouns. 

Personal Pronouns

Personal pronouns are probably the most familiar type of pronoun. Personal pronouns include I, me, you, she, her, him, he, we, us, they, and them. These are called personal pronouns because they refer to a person! Personal pronouns can replace specific nouns in sentences, like a person’s name, or refer to specific groups of people, like in these examples: 

Did you see Gia pole vault at the track meet? Her form was incredible!

The Cycling Club is meeting up at six. They said they would be at the park. 

In both of the examples above, a pronoun stands in for a proper noun to avoid repetitiveness. Her replaces Gia in the first example, and they replaces the Cycling Club in the second example. 

(It’s also worth noting that personal pronouns are one of the easiest ways to determine what point of view a writer is using.) 

Possessive Pronouns

Possessive pronouns are used to indicate that something belongs to or is the possession of someone. The possessive pronouns fall into two categories: limiting and absolute. In a sentence, absolute possessive pronouns can be substituted for the thing that belongs to a person, and limiting pronouns cannot. 

The limiting pronouns are my, your, its, his, her, our, their, and whose, and the absolute pronouns are mine, yours, his, hers, ours, and theirs . Here are examples of a limiting possessive pronoun and absolute possessive pronoun used in a sentence: 

Limiting possessive pronoun: Juan is fixing his car. 

In the example above, the car belongs to Juan, and his is the limiting possessive pronoun that shows the car belongs to Juan. Now, here’s an example of an absolute pronoun in a sentence: 

Absolute possessive pronoun: Did you buy your tickets ? We already bought ours . 

In this example, the tickets belong to whoever we is, and in the second sentence, ours is the absolute possessive pronoun standing in for the thing that “we” possess—the tickets. 

Demonstrative Pronouns, Interrogative Pronouns, and Indefinite Pronouns

Demonstrative pronouns include the words that, this, these, and those. These pronouns stand in for a noun or noun phrase that has already been mentioned in a sentence or conversation. This and these are typically used to refer to objects or entities that are nearby distance-wise, and that and those usually refer to objects or entities that are farther away. Here’s an example of a demonstrative pronoun used in a sentence: 

The books are stacked up in the garage. Can you put those away? 

The books have already been mentioned, and those is the demonstrative pronoun that stands in to refer to them in the second sentence above. The use of those indicates that the books aren’t nearby—they’re out in the garage. Here’s another example: 

Do you need shoes? Here...you can borrow these. 

In this sentence, these refers to the noun shoes. Using the word these tells readers that the shoes are nearby...maybe even on the speaker’s feet! 

Indefinite pronouns are used when it isn't necessary to identify a specific person or thing. Common indefinite pronouns include one, other, none, some, anybody, anyone, everybody, and no one. Here's one example of an indefinite pronoun used in a sentence:

Promise you can keep a secret? 

Of course. I won’t tell anyone. 

In this example, the person speaking in the second two sentences isn’t referring to any particular people who they won’t tell the secret to. They’re saying that, in general, they won’t tell anyone . That doesn’t specify a specific number, type, or category of people who they won’t tell the secret to, which is what makes the pronoun indefinite. 

Finally, interrogative pronouns are used in questions, and these pronouns include who, what, which, and whose. These pronouns are simply used to gather information about specific nouns—persons, places, and ideas. Let’s look at two examples of interrogative pronouns used in sentences: 

Do you remember which glass was mine? 

What time are they arriving? 

In the first sentence, the speaker wants to know which glass belongs to whom. In the second sentence, the speaker is asking for more clarity about a specific time.


#6: Conjunctions

Conjunctions are words that are used to connect words, phrases, clauses, and sentences in the English language. This function allows conjunctions to connect actions, ideas, and thoughts as well. Conjunctions are also used to make lists within sentences. (Conjunctions are also probably the most famous part of speech, since they were immortalized in the famous “Conjunction Junction” song from Schoolhouse Rock .) 

You’re probably familiar with and, but, and or as conjunctions, but let’s look into some subclasses of conjunctions so you can learn about the array of conjunctions that are out there!

Subclasses of Conjunctions, Including Examples

Coordinating conjunctions, subordinating conjunctions, and correlative conjunctions are three subclasses of conjunctions. Each of these types of conjunctions functions in a different way in sentences!

Coordinating Conjunctions

Coordinating conjunctions are probably the most familiar type of conjunction. These conjunctions include the words for, and, nor, but, or, yet, so (people often recommend using the acronym FANBOYS to remember the seven coordinating conjunctions!). 

Coordinating conjunctions are responsible for connecting two independent clauses in sentences, but can also be used to connect two words in a sentence. Here are two examples of coordinating conjunctions that connect two independent clauses in a sentence: 

He wanted to go to the movies, but he couldn’t find his car keys. 

They put on sunscreen, and they went to the beach. 

Next, here are two examples of coordinating conjunctions that connect two words: 

Would you like to cook or order in for dinner? 

The storm was loud yet refreshing. 

The two examples above show that coordinating conjunctions can connect different types of words as well. In the first example, the coordinating conjunction “or” connects two verbs; in the second example, the coordinating conjunction “yet” connects two adjectives. 

But wait! Why does the first set of sentences have commas while the second set of sentences doesn’t? When using a coordinating conjunction, put a comma before the conjunction when it’s connecting two complete sentences . Otherwise, there’s no comma necessary. 

Subordinating Conjunctions

Subordinating conjunctions are used to link an independent clause to a dependent clause in a sentence. This type of conjunction always appears at the beginning of a dependent clause, which means that subordinating conjunctions can appear at the beginning of a sentence or in the middle of a sentence following an independent clause. (If you’re unsure about what independent and dependent clauses are, be sure to check out our guide to compound sentences.) 

Here is an example of a subordinating conjunction that appears at the beginning of a sentence: 

Because we were hungry, we ordered way too much food. 

Now, here's an example of a subordinating conjunction that appears in the middle of a sentence, following an independent clause:

Rakim was scared after the power went out. 

See? In the example above, the subordinating conjunction after connects the independent clause Rakim was scared to the dependent clause after the power went out. Subordinating conjunctions include (but are not limited to!) the following words: after, as, because, before, even though, once, since, unless, until, whenever, and while.

Correlative Conjunctions

Finally, correlative conjunctions are conjunctions that come in pairs, like both/and, either/or, and neither/nor. The two correlative conjunctions that come in a pair must appear in different parts of a sentence to make sense— they correlate the meaning in one part of the sentence with the meaning in another part of the sentence . Makes sense, right? 

Here are two examples of correlative conjunctions used in a sentence: 

We’re either going to the Farmer’s Market or the Natural Grocer’s for our shopping today. 

They’re going to have to get dog treats for both Piper and Fudge. 

Other pairs of correlative conjunctions include as many/as, not/but, not only/but also, rather/than, such/that, and whether/or. 


#7: Interjections 

Interjections are words that often appear at the beginning of sentences or between sentences to express emotions or sentiments such as excitement, surprise, joy, disgust, anger, or even pain. Commonly used interjections include wow!, yikes!, ouch!, or ugh! One clue that an interjection is being used is when an exclamation point appears after a single word (but interjections don’t have to be followed by an exclamation point). And, since interjections usually express emotion or feeling, they’re often referred to as being exclamatory. Wow! 

Interjections don’t come together with other parts of speech to form bigger grammatical units, like phrases or clauses. There also aren’t strict rules about where interjections should appear in relation to other sentences . While it’s common for interjections to appear before sentences that describe an action or event that the interjection helps explain, interjections can appear after sentences that contain the action they’re describing as well. 

Subclasses of Interjections, Including Examples

There are two main subclasses of interjections: primary interjections and secondary interjections. Let’s take a look at these two types of interjections!

Primary Interjections  

Primary interjections are single words, like oh!, wow!, or ouch! that don’t enter into the actual structure of a sentence but add to the meaning of a sentence. Here’s an example of how a primary interjection can be used before a sentence to add to the meaning of the sentence that follows it: 

Ouch ! I just burned myself on that pan!

While someone who hears, I just burned myself on that pan might assume that the person who said that is now in pain, the interjection Ouch! makes it clear that burning oneself on the pan definitely was painful. 

Secondary Interjections

Secondary interjections are words that have other meanings but have evolved to be used like interjections in the English language and are often exclamatory. Secondary interjections can be mixed with greetings, oaths, or swear words. In many cases, the use of secondary interjections negates the original meaning of the word that is being used as an interjection. Let’s look at a couple of examples of secondary interjections here: 

Well , look what the cat dragged in!

Heck, I’d help if I could, but I’ve got to get to work. 

You probably know that the words well and heck weren’t originally used as interjections in the English language. Well originally meant that something was done in a good or satisfactory way, or that a person was in good health. Over time and through repeated usage, it’s come to be used as a way to express emotion, such as surprise, anger, relief, or resignation, like in the example above. 


#8: Prepositions

The last part of speech we’re going to define is the preposition. Prepositions are words that are used to connect other words in a sentence—typically nouns and verbs—and show the relationship between those words. Prepositions convey concepts such as comparison, position, place, direction, movement, time, possession, and how an action is completed. 

Subclasses of Prepositions, Including Examples

The subclasses of prepositions are simple prepositions, double prepositions, participle prepositions, and prepositional phrases. 

Simple Prepositions

Simple prepositions appear before and between nouns, adjectives, or adverbs in sentences to convey relationships between people, living creatures, things, or places . Here are a couple of examples of simple prepositions used in sentences: 

I’ll order more ink before we run out. 

Your phone was beside your wallet. 

In the first example, the preposition before appears between the noun ink and the personal pronoun we to convey a relationship. In the second example, the preposition beside appears between the verb was and the possessive pronoun your.

In both examples, though, the prepositions help us understand how elements in the sentence are related to one another. In the first sentence, we know that the speaker currently has ink but needs more before it’s gone. In the second sentence, the preposition beside helps us understand how the wallet and the phone are positioned relative to one another! 

Double Prepositions

Double prepositions are exactly what they sound like: two prepositions joined together into one unit to connect phrases, nouns, and pronouns with other words in a sentence. Common examples of double prepositions include outside of, because of, according to, next to, across from, and on top of. Here is an example of a double preposition in a sentence: 

I thought you were sitting across from me. 

You see? Across and from both function as prepositions individually. When combined in a sentence, they create a double preposition. (Also note that the prepositions help us understand how two people—you and I—are positioned relative to one another through a spatial relationship.)

Prepositional Phrases

Finally, prepositional phrases are groups of words that include a preposition and a noun or pronoun. Typically, the noun or pronoun that appears after the preposition in a prepositional phrase is called the object of the preposition. The object always appears at the end of the prepositional phrase. Additionally, prepositional phrases never include a verb or a subject. Here are two examples of prepositional phrases: 

The cat sat under the chair . 

In the example above, “under” is the preposition, and “the chair” is the noun, which functions as the object of the preposition. Here’s one more example: 

We walked through the overgrown field . 

Now, this example demonstrates one more thing you need to know about prepositional phrases: they can include an adjective before the object. In this example, “through” is the preposition, and “field” is the object. “Overgrown” is an adjective that modifies “the field,” and it’s quite common for adjectives to appear in prepositional phrases like the one above. 

While that might sound confusing, don't worry: the key is identifying the preposition in the first place! Once you can find the preposition, you can start looking at the words around it to see whether it forms a double preposition or a prepositional phrase.


10 Question Quiz: Test Your Knowledge of Parts of Speech Definitions and Examples

Since we’ve covered a lot of material about the 8 parts of speech with examples ( a lot of them!), we want to give you an opportunity to review and see what you’ve learned! While it might seem easier to just use a parts of speech finder instead of learning all this stuff, our parts of speech quiz can help you continue building your knowledge of the 8 parts of speech and master each one. 

Are you ready? Here we go:  

1) What are the 8 parts of speech? 

a) Noun, article, adverb, antecedent, verb, adjective, conjunction, interjection
b) Noun, pronoun, verb, adverb, determiner, clause, adjective, preposition
c) Noun, verb, adjective, adverb, pronoun, conjunction, interjection, preposition

2) Which parts of speech have subclasses?

a) Nouns, verbs, adjectives, and adverbs
b) Nouns, verbs, adjectives, adverbs, conjunctions, and prepositions
c) All of them! There are many types of words within each part of speech.

3) What is the difference between common nouns and proper nouns?

a) Common nouns don't refer to specific people, places, or entities, but proper nouns do refer to specific people, places, or entities.
b) Common nouns refer to regular, everyday people, places, or entities, but proper nouns refer to famous people, places, or entities.
c) Common nouns refer to physical entities, like people, places, and objects, but proper nouns refer to nonphysical entities, like feelings, ideas, and experiences.

4) In which of the following sentences is the emboldened word a verb?

a) He was frightened by the horror film.
b) He adjusted his expectations after the first plan fell through.
c) She walked briskly to get there on time.

5) Which of the following is a correct definition of adjectives, and what other part of speech do adjectives modify?

a) Adjectives are describing words, and they modify nouns and noun phrases.
b) Adjectives are describing words, and they modify verbs and adverbs.
c) Adjectives are describing words, and they modify nouns, verbs, and adverbs.

6) Which of the following describes the function of adverbs in sentences?

a) Adverbs express frequency, degree, manner, time, place, and level of certainty.
b) Adverbs express an action performed by a subject.
c) Adverbs describe nouns and noun phrases.

7) Which of the following answers contains a list of personal pronouns?

a) This, that, these, those
b) I, you, me, we, he, she, him, her, they, them
c) Who, what, which, whose

8) Where do interjections typically appear in a sentence?

a) Interjections can appear at the beginning of or in between sentences.
b) Interjections appear at the end of sentences.
c) Interjections appear in prepositional phrases.

9) Which of the following sentences contains a prepositional phrase?

a) The dog happily wagged his tail.
b) The cow jumped over the moon.
c) She glared, angry that he forgot the flowers.

10) Which of the following is an accurate definition of a “part of speech”?

a) A category of words that serve a similar grammatical purpose in sentences.
b) A category of words that are of similar length and spelling.
c) A category of words that mean the same thing.

So, how did you do? If you got 1C, 2C, 3A, 4B, 5A, 6A, 7B, 8A, 9B, and 10A, you came out on top! There’s a lot to remember where the parts of speech are concerned, and if you’re looking for more practice like our quiz, try looking around for parts of speech games or parts of speech worksheets online!


Etymology

speech (n.)

Middle English speche, from Old English spæc "act of speaking; power of uttering articulate sounds; manner of speaking; statement, discourse, narrative, formal utterance; language." It is a variant of Old English spræc, which is from Proto-Germanic *sprek-, *spek- (source also of Danish sprog, Old Saxon spraca, Old Frisian spreke, Dutch spraak, Old High German sprahha, German Sprache "speech"). See speak (v.).

The spr- forms were extinct in English by 1200. In reference to written words by c. 1200. The meaning "address delivered to an audience" is recorded by 1580s.

And I honor the man who is willing to sink
Half his present repute for the freedom to think,
And, when he has thought, be his cause strong or weak,
Will risk t' other half for the freedom to speak,
Caring naught for what vengeance the mob has in store,
Let that mob be the upper ten thousand or lower.
[James Russell Lowell, from "A Fable for Critics," 1848]
But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That, at any rate, is the theory of our Constitution. It is an experiment, as all life is an experiment. ... I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death, unless they so imminently threaten immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country. [Oliver Wendell Holmes Jr., dissent to "Abrams v. United States," 1919]

Entries linking to speech

speak (v.)

Middle English speken, from Old English specan, variant of sprecan "to utter words articulately without singing, have or use the power of speech; make a speech; hold discourse" with others (class V strong verb; past tense spræc, past participle sprecen), from Proto-Germanic *sprekanan (source also of Old Saxon sprecan, Old Frisian spreka, Middle Dutch spreken, Old High German sprehhan, German sprechen "to speak," Old Norse spraki "rumor, report"). This has sometimes been said to represent a PIE root meaning "to strew," on notion of speech as a "scattering" of words, but Boutkan finds no Indo-European etymology for the Germanic word.

In English the -r- began to drop out in Late West Saxon and was gone by mid-12c., perhaps from influence of Danish spage "crackle," also used in a slang sense of "speak" (compare crack (v.) in slang senses having to do with speech, such as wisecrack, cracker, all it's cracked up to be).

Apparently not the primary word for "to speak" in Old English (the "Beowulf" author prefers maþelian , from mæþel "assembly, council," from root of metan "to meet;" compare Greek agoreuo "to speak, explain," originally "speak in the assembly," from agora "assembly").

Also in Old English and Middle English as "to write, state or declare in writing." Of things, "be expressive or significative," by 1530s.

Speak is more general in meaning than talk. Thus, a man may speak by uttering a single word, whereas to talk is to utter words consecutively; so a man may be able to speak without being able to talk. Speak is also more formal in meaning: as, to speak before an audience; while talk implies a conversational manner of speaking. [Century Dictionary]

To speak out is from late 14c. as "speak loudly;" by 1690s as "speak freely and boldly." To speak up "speak on behalf" (of another, etc.) is by 1705; as "speak loudly" by 1723. To speak for "make a speech on behalf of" is by c. 1300; to speak for itself "be self-evident" is by 1779.

Speaking terms "relationship between two in which they converse with one another" is from 1786, often in the negative. As a type of megaphone, speaking-tube is by 1825; speaking-trumpet by 1670s.

"make a speech, harangue," especially "talk in a pompous, pontifical way," 1723, implied in speechifying , from speech + -ify . With humorous or contemptuous force. Related: S peechification .


US Senate candidate apologizes for using racist slur while trying to say ‘bugaboo’

David Trone, a Maryland congressman, used a derogatory and offensive term for a Black person during a Capitol Hill hearing


A Maryland Democratic congressman running for US Senate has apologised for using a racist slur during a hearing on Capitol Hill.

Speaking during a House budget committee hearing, David Trone said: “So this Republican jigaboo that it’s the tax rate that’s stopping business investment, it’s just completely faulty by people who have never run a business. They’ve never been there. They don’t have a clue what they’re talking about.”

“Jigaboo” is a derogatory and offensive term for a Black person. The Oxford English Dictionary says the word is of unknown origin, its first documented use found in a song from 1909.

Trone apologized in a statement to the Washington Post . “While attempting to use the word ‘bugaboo’ in a hearing, I misspoke and mistakenly used a phrase that is offensive,” he said.

“Upon learning the meaning of the word I was deeply disappointed to have accidentally used it, and I apologise.”

Merriam-Webster defines “bugaboo” as “an imaginary object of fear”.

In 2009, the rapper Jay-Z discussed with the Guardian his use of the N-word in his music, saying: “If you eliminate [it, racists will] say ‘monkey’ or ‘jigaboo’.”

The word “jigaboo” has recently been an occasional source of controversy .

Shalanda Young, the director of the Office of Management and Budget, to whom Trone was speaking on Thursday, is Black. She declined to comment to the Post.

In Maryland, Trone leads Democratic polling in the party's race to contest the US Senate seat held by the retiring Ben Cardin.

Trone’s closest competitor, Angela Alsobrooks , a state politician, is Black. She also declined to comment.

In early polling, Trone and Alsobrooks trail Larry Hogan, the probable Republican nominee for Senate, a moderate who was a popular governor until 2023, when he was succeeded by Wes Moore, a Democrat and the first Black governor of the mid-Atlantic state.

Trone said the word he used “has a long dark terrible history” and “should never be used any time, anywhere, in any conversation.

“I recognise that as a white man, I have privilege. And as an elected official, I have a responsibility for the words I use – especially in the heat of the moment. Regardless of what I meant to say, I shouldn’t have used that language.”


Did Trump Say It Will Be a 'Bloodbath for the Country' If He Doesn't Get Elected?

Trump spoke about trade tariffs with China at a campaign rally in Dayton, Ohio, on March 16, 2024. By David Emery. Published March 17, 2024.

Correct Attribution


The context of the remark suggests Trump was predicting an "economic bloodbath" for the country, not a literal one, if he loses the 2024 presidential election.

On March 16, 2024, the hashtag "#bloodbath" trended sharply on social media in the wake of a Dayton, Ohio, campaign speech earlier that day by former U.S. President Donald Trump. Trump  stirred up controversy by claiming that if he didn't get elected for another presidential term, "It's going to be a bloodbath for the country."

A video recording of the speech from C-SPAN provides proof that he said exactly those words, which many partisan observers, such as the author of the X (formerly Twitter) post below, interpreted as a threat of post-election violence: 

OHIO RALLY: Trump threatens violent "BLOODBATH" if he's not re-elected https://t.co/edVPhuxgHE — #TheResistance (@SocialPowerOne1) March 17, 2024

The post above linked to an article on Occupy Democrats , a left-wing website, which pushed the "violent bloodbath" interpretation of Trump's words even as it acknowledged that, as the author put it, the context left "wiggle room" for interpretation. What was that context? Broadly speaking, it was economic. Trump was in the middle of talking about the U.S. automobile industry and the country's trade imbalance with China (emphasis added): 

China now is building a couple of massive plants where they're going to build the cars in Mexico and think, they think, that they're going to sell those cars into the United States with no tax at the border. Let me tell you something, to China, if you're listening, President Xi — and you and I are friends, but he understands the way I deal — those big, monster car-manufacturing plants that you're building in Mexico right now, and you think you're going to get that, you're going to not hire Americans, and you're going to sell the cars to us?  No, we're going to put a 100% tariff on every single car that comes across the line, and you're not going to be able to sell those cars if I get elected. Now, if I don't get elected, it's going to be a bloodbath for the whole — that's going to be the least of it, it's going to be a bloodbath for the country, that'll be the least of it. But they're not going to sell those cars, they're building massive factories.

As some social media users pointed out in lengthy threads debating what Trump really meant, popular dictionaries like Merriam-Webster include " major economic disaster " as a secondary meaning of "bloodbath." 

Ultimately, however, "bloodbath for the country" is an ambiguous figure of speech, and Trump has a controversial history of using violence-tinged language in reference to political opponents, which, even if the intent was metaphorical, sarcastic or just to get media attention, makes it unsurprising that his use of the phrase "bloodbath for the country" drew instant public criticism.

Bazzle, Stephanie. "OHIO RALLY: Trump Threatens Violent 'BLOODBATH' If He's Not Re-Elected." Occupy Democrats , 17 Mar. 2024, https://occupydemocrats.com/2024/03/17/ohio-rally-trump-threatens-violent-bloodbath-if-hes-not-re-elected/.

"Definition of Bloodbath." Merriam-Webster, 12 Mar. 2024, https://www.merriam-webster.com/dictionary/bloodbath.

"Former President Trump Campaigns for Bernie Moreno." C-SPAN.org, https://www.c-span.org/video/?534259-1/president-trump-campaigns-bernie-moreno. Accessed 17 Mar. 2024.

Treene, Alayna, Arit John, and Kit Maher. "Trump Warns of 'Bloodbath' for Auto Industry and Country If He Loses the Election." CNN, 17 Mar. 2024, https://www.cnn.com/2024/03/16/politics/trump-bloodbath-auto-industry-election/index.html.


Congressman apologizes for using racial slur instead of saying ‘bugaboo’

U.S. Senate hopeful Rep. David Trone (D-Md.) made the comment while discussing tax rates in a budget hearing.


Senate hopeful Rep. David Trone (D) used a racial slur during a congressional budget hearing Thursday and later apologized, saying he misspoke and did not know what the word meant.

Trone dropped a derogatory word for Black people into a brief speech praising President Biden's tax proposals toward the end of a friendly exchange with the director of the Office of Management and Budget.

After explaining that corporate tax rates did not influence how he invested hundreds of millions in his national liquor company, Trone continued:

“So this Republican jigaboo that it’s the tax rate that’s stopping business investment, it’s just completely faulty by people who have never run a business,” he said during a House Budget Committee hearing. “They’ve never been there. They don’t have a clue what they’re talking about.”

The slur is among the derogatory terms used to caricature Black people. After being contacted by The Washington Post hours after the remark, Trone apologized in a statement.

“Today while attempting to use the word ‘bugaboo’ in a hearing, I misspoke and mistakenly used a phrase that is offensive. Upon learning the meaning of the word I was deeply disappointed to have accidentally used it, and I apologize,” the statement said.

Trone, 68, is a leading Democrat in the May 14 primary race to succeed retiring Sen. Ben Cardin (D-Md.) and represent the most diverse state on the East Coast .

Polling released this week shows the race is wide open, with 39 percent of primary voters undecided and Trone leading his chief opponent, Prince George’s County Executive Angela D. Alsobrooks (D), by 7 percentage points. The winner is likely to face former Maryland governor Larry Hogan (R) in November’s general election.

Through a spokesperson, Alsobrooks declined to comment on Trone’s remark.

Trone began his comments Thursday appearing to read from prepared remarks and cordially welcoming OMB Director Shalanda Young, who is Black, and offering to yield his time to her if she had any points to bring up.

Young and Trone went on to agree that “something has to be done” about how little tax corporations and the wealthy pay, with Trone saying Biden’s proposed budget would help free up money to improve child care and increase pre-K access across the country. He calculated how many people could be put on food assistance programs if Amazon paid a 21 percent corporate tax rate.

“That’s a pretty good trade-off. Amazon can certainly afford that. Jeff Bezos can easily afford it, good Lord,” Trone said. (Amazon founder Bezos owns The Washington Post.)

Reached through staff, Young declined to comment on Trone’s remark.

Trone released an expanded statement later Thursday saying the word he used “has a long dark terrible history. It should never be used any time, anywhere, in any conversation. I recognize that as a white man, I have privilege. And as an elected official, I have a responsibility for the words I use — especially in the heat of the moment. Regardless of what I meant to say, I shouldn’t have used that language.”


Cambridge Dictionary


Meaning of speech in English


speech noun (SAY WORDS)

  • She suffers from a speech defect.
  • From her slow, deliberate speech I guessed she must be drunk.
  • Freedom of speech and freedom of thought were both denied under the dictatorship.
  • As a child, she had some speech problems.
  • We use these aids to develop speech in small children.

speech noun (FORMAL TALK)

  • talk: She will give a talk on keeping kids safe on the internet.
  • lecture: The lecture is entitled "War and the Modern American Presidency."
  • presentation: We were given a presentation of progress made to date.
  • speech: You might have to make a speech when you accept the award.
  • address: He took the oath of office then delivered his inaugural address.
  • oration: It was to become one of the most famous orations in American history.
  • Her speech was received with cheers and a standing ovation.
  • She closed the meeting with a short speech.
  • The vicar's forgetting his lines in the middle of the speech provided some good comedy.
  • Her speech caused outrage among the gay community.
  • She concluded the speech by reminding us of our responsibility.

UN calls for united action to combat rising Islamophobia

Displaced girls play at a UNICEF-supported learning space in Al Salam, Sudan.


Amid a rising tide of anti-Muslim hate, top UN officials condemned the scourge on Friday as the General Assembly adopted a resolution to push back against it during commemorations marking the International Day to Combat Islamophobia.

The new resolution, tabled by Pakistan, calls for, among other things, concerted action to fight ongoing violence against Muslims and requests the UN Secretary-General to appoint a special envoy to combat Islamophobia.

The world body created the International Day through a resolution adopted following attacks on two mosques in Christchurch, New Zealand, that left 51 people dead on this day in 2019.

Prior to adopting the new resolution, by a vote of 113 in favour to none against, with 44 abstentions, a divided Assembly rejected by a close margin two amendments proposed by a group of European nations.

The proposals would have replaced key language in the resolution, including calling for a focal point instead of a UN special envoy and removing references to the desecration of the Quran.

Online hate speech ‘fuelling real-life violence’

The UN chief on Friday said “divisive rhetoric and misrepresentation are stigmatising communities” and everyone must unite to combat intolerance, stereotypes and bias.

“Online hate speech is fuelling real-life violence,” Secretary-General António Guterres said in a statement, emphasising that digital platforms must moderate hateful content and protect users from harassment.

Institutional discrimination and other barriers are violating the human rights and dignity of Muslims, and much of this disturbing trend is part of a wider pattern of attacks against religious groups and vulnerable populations, also including Jewish people, minority Christian communities and others, he added.

“We must confront and root out bigotry in all its forms,” he declared. “Leaders must condemn inflammatory discourse and safeguard religious freedom. Together, let us commit to promoting mutual respect and understanding, foster social cohesion and build peaceful, just and inclusive societies for all.”

‘Faith literacy’ must combat religion-based hate

In Geneva, Volker Türk, UN High Commissioner for Human Rights ( OHCHR ), said all forms of religious hatred and intolerance are unacceptable.

“The message today is perhaps more urgent than ever: we are all well past the hour to restore peace, tolerance and respect,” he said. “We know that fear breeds hate, ignorance and distrust of the other.”

“Islamophobia has stolen lives”, dehumanising entire communities and sparking “torrents of hate speech, magnified by social media”, he said, citing multiple reports on “huge spikes” in Islamophobic incidents amid the current conflict in the Middle East, with a nearly 600 per cent increase in some countries in North America and Europe.


States must record such incidents and urgently step up their efforts to combat intolerance against people based on religion or belief using the many available tools at their disposal, including the OHCHR guide to developing anti-discrimination legislation.

“Faith literacy – in other words, knowledge and understanding about the values of each religion and belief – is also crucial,” Mr. Türk said, urging States to include it as part of comprehensive training initiatives on combating religious hatred for law enforcement officers and the judiciary, faith-based actors, teachers and media professionals.

Anti-Muslim hate spikes

Also in Geneva, Nassima Baghli, Permanent Observer of the OIC, hosted a commemorative event on Friday, saying that “Islamophobia is on the rise following the Israeli aggression on Gaza”.

Citing recent anti-Muslim incidents, she pointed to cases several months ago of the desecration of the Quran.

“Discrimination and stereotypes based on religion or belief are doing a lot of harm as they dehumanise people and prevent them from enjoying their rights,” Ms. Baghli said.

“We need to combat these scourges with great resolve with all the tools at our disposal,” she said. “Our common goal is to promote mutual understanding and respect for all.”

As millions around the world start observing the holy month of Ramadan, sadly in Gaza and across the region, many will mark this month facing conflict, displacement and fear.

UN rights experts: Nobody should fear having a religion

UN independent rights experts raised a range of concerns in a statement issued on Friday, stressing that “States and faith-based actors have human rights responsibilities, and they have to step in to counter such violations” in line with the Rabat Plan of Action and the UN Faith for Rights framework and the #Faith4Rights toolkit to encourage respect for religious diversity.

“Across the world, we have witnessed attacks on mosques, cultural centres, schools and even private property belonging to Muslims,” said the Human Rights Council-appointed experts, who are not UN staff and do not receive a salary.

“During this holy month of Ramadan, we are appalled at the continued refusal by Israel to allow adequate humanitarian assistance and food aid to be provided to the mainly Muslim civilian population in Gaza despite the widespread hunger and signs of severe malnutrition,” the experts said, also raising serious concerns about undue restrictions imposed on access to the Al Aqsa Mosque in Jerusalem and the destruction of a significant number of places of worship in Gaza.

“Nobody should suffer fear for having or manifesting their religion or belief,” they said. “Everyone should feel safe and benefit from the equal protection of their human rights, which must be guaranteed by all States.”


Trump’s Warning of a ‘Blood Bath’ if He Loses

More from our inbox: Orli and the Fox; It Was ‘Poisoning,’ Not an ‘Overdose’; Class-Based Admissions

Donald Trump, seen from behind and at a distance, speaks to a large crowd from behind a lectern.

To the Editor:

Re “Trump Says Some Migrants Are ‘Not People’ and Predicts a ‘Blood Bath’ if He Loses” (nytimes.com, March 16):

In a campaign speech in Ohio on Saturday, former President Donald Trump said that if he didn’t get elected, “it’s going to be a blood bath for the country.”

His warning was not a prediction. This was a brazen threat: If the election disappoints Mr. Trump and his followers, they will revolt. Mr. Trump might be increasingly inarticulate, but the peril is clear in his own words.

In Trump v. Anderson , the Supreme Court recently dodged the question of whether Mr. Trump’s behavior before and on Jan. 6, 2021, made him an insurrectionist within the meaning of the 14th Amendment. After Saturday’s speech, further evasion will be unconscionable. Mr. Trump’s use of the phrase “blood bath” was not exaggeration for effect or bombast. It was simple menace, a direct step toward sedition. American democracy’s future is at stake.

Mr. Trump has effectively secured a major party’s nomination for president. It will soon be up to the voters to recognize the risk and prevent a reign of terror. The whole world is watching.

Steven S. Berizzi Norwalk, Conn.

Mr. Trump, I heard your words about a “blood bath.” How am I to interpret this?

You later claimed that you meant the auto industry. Now Mr. Trump, I don’t think that impassioned remark of yours about a blood bath was about cars. I suspect you know it wasn’t.

I look around at my neighborhood and think “blood bath” if you lose the election. We are a 50/50 community. Half for you and half not. So will there be a blood bath on the streets of El Dorado Hills, just 20 miles from California’s state capitol, if you lose? Are there Praetorian-style guards in my neighborhood just waiting for the phone call? Ready to drag my disabled daughter and me out of our home into our street to kill us?

What’s your plan, Mr. Trump? And spare the con man’s best comeback, “wait and see.” Blood bath. That is quite an expression.

Christine Bauer El Dorado Hills, Calif.

I will not be threatened. The threat by Donald Trump that if he is not elected, “it’s going to be a blood bath” demonstrates definitively, undeniably and unequivocally that he is not fit to be president of the United States.

That The New York Times failed to make this threat its lead story does a grave disservice to our nation.

Joan Kass Chilmark, Mass.

Had President Biden told the world that we should expect a “blood bath” if he loses the election, as Donald Trump did in Dayton, Ohio, on Saturday, there would have been a clamor of outrage and denunciation from his Republican opponents. Where is the outrage about Mr. Trump’s use of such language? Where are the voices of reason and restraint?

No Republican dares to criticize their fearless leader. Their profiles in cowardice are an appalling sight for all the civilized world to see.

When will our populace come to their senses? Conscientious, civilized people everywhere must reject this irresponsible bombast.

Mr. Trump is recklessly endangering every man, woman and child in our nation. He has gone too far and must be stopped.

H. James Quigley Jr. Laguna Woods, Calif.

Au contraire, Mr. Trump. If you lose this year’s November election, there will be dancing in the streets and champagne corks popped “the likes of which you’ve never seen.”

You can bank on it.

Lois Berkowitz Oro Valley, Ariz.

Re “What Does a Year Mean to a Grieving Parent?,” by Sarah Wildman (Opinion guest essay, March 17), about the first anniversary of the death of her daughter Orli:

At sunrise this morning, a fox crossed the road before me. She turned, studying me, and I felt the stir of the new day’s magic. A few hours later, I read Ms. Wildman’s essay, with the digital headline “‘If You See a Fox and I’ve Died, It Will Be Me.’”

I was struck by the coincidence of having just seen “a bright young fox” and reading Ms. Wildman’s brilliant and moving story. It seemed right to thank her for this gift: Orli’s bright young spirit flowing from her mother’s aching heart onto the page.

I wish I had met Orli. We would talk about foxes, look at fox photos, read passages from fox books. We would have shared how one day a fox came to us and never left, as Orli came to me through her mother’s story, how we each came to love foxes, as Orli herself will forever be loved, a beautiful child who is at last free to roam the bright universe.

Ellen Thornton Atlantic Beach, Fla.

Re “In Fentanyl Deaths, Victims’ Families Say Word Choice Matters” (front page, March 11):

The heartbreaking story of 19-year-old Ryan Bagwell’s death further underscores the urgent need to reshape the narrative around drug-related fatalities. The distinction between “overdose” and “poisoning” is not merely semantic; it reflects the profound impact on families and caregivers grappling with loss.

Moreover, the stigma associated with “overdose” compounds the pain for grieving families. It unfairly implies personal responsibility and addiction, perpetuating harmful stereotypes. We must recognize that victims like Ryan were unsuspecting casualties, not willing participants in their demise.

Family caregivers who tend to their loved ones as they struggle with health challenges, including addiction, understand the severity of this crisis. Fentanyl, a synthetic opioid exponentially more potent than heroin, is ravaging communities nationwide with a disproportionately higher lethal impact in American Indian, Alaska Native and Black populations.

Its presence in counterfeit pills poses lethal risks, as evidenced by Ryan’s case. The term “poisoning” rightly emphasizes the victimhood of those unknowingly exposed to this deadly substance.

In addition to diligent medication management to prevent accidental exposure to potent substances like fentanyl, the front lines of this crisis require comprehensive tools, including access to all F.D.A.-approved agents that reverse the effects of an opioid overdose, to respond in the event of a fentanyl poisoning.

Marvell Adams Jr. Baltimore The writer is the C.E.O. of the Caregivers Action Network.

Re “What the Future of Admissions at Elite Schools Might Look Like,” by David Leonhardt (The Morning, March 3):

In the wake of court decisions whittling away affirmative action, it is becoming known that class-based admissions policies can help maintain racial diversity at elite colleges.

SAT scores, when used properly, can help admissions officers identify minority applicants who show great potential for academic achievement. The test itself was designed to combat exclusion at a time when Ivy League schools had restrictive quotas on minority applicants.

Overcoming disadvantage is a legitimate criterion for admission and essential to efforts to diversify the student body.

Max Herman Jersey City, N.J. The writer is an associate professor of sociology at New Jersey City University.

StarsInsider

Famous acronyms you should definitely know the meaning of

Posted: March 27, 2024 | Last updated: March 27, 2024

We sometimes have a lot to say, but little time to say it. This is when acronyms come in handy for both formal and informal writing (and even speech). So what exactly are acronyms? Well, they're new words you create when you use the first letters of each word in a phrase or a list. Combining these letters, you get a pronounceable word that other readers and listeners understand. Acronyms are shorthand abbreviations that, over time, become part of everyday language.

You've probably seen some of these mixed in with common English texting abbreviations. But what exactly do they mean? Find out by clicking through the following gallery.
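Before the gallery, here is a minimal, purely illustrative Python sketch of the "first letters" recipe described above. The helper name make_acronym is my own and is not from the article, and many real acronyms in the gallery (SCUBA, LASER, a.m./p.m.) bend the rule by taking extra letters or dropping small words.

    # Illustrative sketch only (not from the article): build an acronym by
    # taking the first letter of each word and upper-casing the result.
    def make_acronym(phrase: str) -> str:
        return "".join(word[0].upper() for word in phrase.split() if word[0].isalpha())

    # Gallery entries that happen to follow the simple rule exactly:
    print(make_acronym("you only live once"))    # YOLO
    print(make_acronym("fear of missing out"))   # FOMO
    print(make_acronym("as soon as possible"))   # ASAP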


The IMAX in IMAX Theater actually stands for "Image Maximum," a large-format movie theater.

RADAR stands for the technology called "radio detection and ranging." It's a device capable of detecting objects at far-off distances and measuring their range using electromagnetic waves.

TASER, the electrical weapon, is actually an acronym for "Thomas A. Swift's Electric Rifle." The name was inspired by the main character in Victor Appleton's 1911 novel 'Tom Swift and His Electric Rifle.'

SCUBA, the piece of diving equipment, is an acronym for "self-contained underwater breathing apparatus." The acronym is much easier to remember!

An essential part of using your phone, the SIM in SIM card is actually an abbreviation for "Subscriber Identity Module."

You probably use a.m. and p.m. every day. But did you know they stand for "ante meridiem" and "post meridiem"? It's Latin for before midday and after midday.

NASA, the American space agency, actually stands for "National Aeronautics and Space Administration."

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart." A mouthful! It's an interactive feature added to websites to distinguish whether a human or a robot is using the form.

The letters in LASER stand for "Light Amplification by Stimulated Emission of Radiation." Laser devices emit light through a process of optical amplification based on the stimulated emission of electromagnetic radiation.

You probably wouldn't have guessed this, but Yahoo! stands for "Yet Another Hierarchically Organized Oracle."

BAE means "before anyone else." It often refers to a boyfriend or girlfriend.

UNICEF is the United Nations International Children’s Emergency Fund. They provide funding to help children in crisis around the world.

In the US, a SWAT team is a police tactical unit that uses specialized or military equipment and tactics. The acronym stands for "Special Weapons and Tactics."

URL is an acronym for "Uniform Resource Locator." It's a reference, or an address, to a resource on the internet.

People rarely, if ever, use the full name behind JPEG, the "Joint Photographic Experts Group," which is understandable!

That feeling of apprehension that one is either clueless or missing out on information, events, and experiences is known as "fear of missing out," or FOMO.

POTUS is an acronym that stands for the President of the United States. For its part, FLOTUS stands for First Lady of the United States.

When people search for these moving images, many don't know that GIF stands for "graphics interchange format."

NATO is the North Atlantic Treaty Organization. Founded in 1949, they work to protect the freedom and security of member countries in North America and Europe.

A famous acronym thanks to Drake, YOLO stands for "you only live once." It's commonly used to encourage exciting or thrilling experiences.

When in all caps, the word PIN stands for "personal identification number." This is the secret number you use to access private documents, files, and account information.

ASAP is an acronym for "as soon as possible." This common phrase means you'll do something when you have the chance, or that you want someone to do something quickly.

FBI stands for the Federal Bureau of Investigation. The domestic intelligence and security service of the US was founded in 1908.

DIY, or "do it yourself," is commonly used on websites or social media accounts that provide tutorials or help.

LOL stands for "laugh out loud" and typically means something is very funny. If you haven't been living under a rock, this one should be rather obvious!

BRB stands for "be right back." It's often used in text or chat discussions when one person has to step away.

If you need to write "for your information" in texts or messaging apps, then use FYI.

Commonly assumed to stand for "save our ship," SOS, the international distress signal, doesn’t actually stand for anything!

Sources: (Reader's Digest) (Insider)




Government strengthens approach to counter extremism

Definition of extremism updated to respond to increased extremist threat since October 7 terror attacks in Israel.


  • Definition of extremism updated to respond to increased extremist threat since October 7 terror attacks in Israel
  • New engagement principles published to ensure government does not legitimise extremist groups 
  • Follows Prime Minister’s commitment to stamp out extremism to ensure we keep our citizens safe and our country secure                 

An updated, more focused definition designed to help tackle the ever-evolving threat of extremism in the UK has been published by the government today (14 March 2024).

The updated and more precise definition of extremism will be used by government departments and officials alongside a set of engagement principles, to ensure they are not inadvertently providing a platform, funding or legitimacy to groups or individuals who attempt to advance extremist ideologies that negate our fundamental rights and freedoms and overturn the UK’s system of liberal parliamentary democracy. This definition is not statutory and has no effect on the existing criminal law – it applies to the operations of government itself.

Since the 7 October Hamas terror attacks in Israel, concerns have been raised about the wide-ranging risk of radicalisation. On hate crime, since 7 October the Community Security Trust recorded 4,103 antisemitic incidents in the UK in 2023, an increase of 147% compared to 2022, and Tell MAMA recorded a 335% increase in anti-Muslim hate cases in the last four months.

As the Prime Minister said recently, this kind of behaviour and intimidation is unacceptable, does not reflect the values of the United Kingdom and must be resisted at all times.

The new definition and engagement principles will make sure those who promote extreme ideologies or spread hate in their communities are not legitimised through their interactions with government. Following publication, the government will undertake a robust process to assess groups for extremism against the definition, which will then inform decisions around government engagement and funding.

It is the first in a series of steps to promote social cohesion, democratic resilience, and to counter extremism and religious hatred. 

Michael Gove, Secretary of State for Levelling Up, Housing and Communities, said:

The United Kingdom is a success story – a multi-national, multi-ethnic, multi-faith democracy. It is stronger because of its diversity.

But our democracy and our values of inclusivity and tolerance are under challenge from extremists. In order to protect our democratic values, it is important both to reinforce what we have in common and to be clear and precise in identifying the dangers posed by extremism.

The pervasiveness of extremist ideologies has become increasingly clear in the aftermath of the 7 October attacks and poses a real risk to the security of our citizens and our democracy. This is the work of Extreme Right-Wing and Islamist extremists who are seeking to separate Muslims from the rest of society and create division within Muslim communities. They seek to radicalise individuals, deny people their full rights, suppress freedom of expression, incite hatred, and undermine our democratic institutions.

Today’s measures will ensure that government does not inadvertently provide a platform to those setting out to subvert democracy and deny other people’s fundamental rights. This is the first in a series of measures to tackle extremism and protect our democracy.

The new definition provides a stricter characterisation that government can use to make sure that extremist organisations and individuals are not being legitimised or given a platform through their interactions with government. It reads: 

Extremism is the promotion or advancement of an ideology based on violence, hatred or intolerance, that aims to: 

  1. negate or destroy the fundamental rights and freedoms of others; or
  2. undermine, overturn or replace the UK’s system of liberal parliamentary democracy and democratic rights; or
  3. intentionally create a permissive environment for others to achieve the results in (1) or (2).

The new definition is narrower and more precise than the 2011 Prevent definition, which did not provide the detail we now need to assess and identify extremism. This new definition helps clearly articulate how extremism is evidenced through the public behaviour of extremists that advance their violent, hateful or intolerant aims.

It draws on the work of Dame Sara Khan and Sir Mark Rowley’s 2021 Operating with Impunity Report and addresses key recommendations from the 2023 Independent Review of Prevent.

The definition is clear that extremism involves advancing or promoting an ideology based on violence, hatred or intolerance, a high bar that only captures the most concerning of activities. It is not about silencing those with private and peaceful beliefs – nor will it affect free speech, which will always be protected.

It does not create new powers; instead, it helps the government and our partners better identify extremist organisations, individuals and behaviours.

Alongside the new definition, the government is also publishing a set of engagement principles which are designed to help officials to engage more widely whilst mitigating the risk of undertaking engagement that undermines government’s core aims to:

  • maintain public confidence in government
  • uphold democratic values
  • protect the rights and freedoms of others

UK Ministerial departments will be expected to consider the engagement standards when deciding whether to move forward with engagement with groups that meet the new definition. This will ensure the government does not meet, fund or provide a platform to extremist groups or individuals. It will also apply to the honours system and due diligence for public appointments. Non-central government institutions, such as arms-length bodies, higher education institutions and independent organisations including the police and CPS, will not be obliged to adopt the definition or apply the engagement principles initially.

To ensure that government has the tools it needs to effectively counter extremism, a new counter-extremism centre of excellence has been established in the Department for Levelling Up, Housing and Communities. This unit will provide leadership for the cross-government counter-extremism community, ensure consistent application of the definition and engagement standards, and take the lead on producing strategic assessments of extremism.

This team will draw on the expertise of the Commission for Countering Extremism, and counter-extremism policy fellows – some of the country’s foremost counter-extremism experts – will join the centre of excellence to ensure the very best academic insight is shaping our approach.

Lord Walney, Independent Adviser on Political Violence and Disruption, said: 

The threat to Britain from extremists includes those who may not use violence directly yet target our core values, so it is welcome that this updated definition includes those who seek to undermine or replace liberal democracy. Greater clarity in defining extremism can underpin a concerted approach across civil society to protect our country.

Professor Ian Acheson, Senior Advisor, Counter Extremism Project, said:

These are necessary next steps to confront and deter those who advocate for violent extremism. Hateful anti-British ideas that undermine our democracy, creating intimidation and fear, need ideologues to drive them. It is intolerable that the state underwrites people and organisations poisoning community life in one of the most successful multi-ethnic countries in the world.

