English Grammar Quiz for ESL learners

Reported Speech Quiz

You can do this grammar quiz online or print it on paper. It tests what you learned on the Reported Speech pages.

1. Which is a reporting verb?

2. He said that it was cold outside. Which word is optional?

3. "I bought a car last week." Last week he said he had bought a car

4. "Where is it?" said Mary. She

5. Which of these is usually required with reported YES/NO questions?

6. Ram asked me where I worked. His original words were

7. "Don't yell!" is a

8. "Please wipe your feet." I asked them to wipe

9. She always asks me not to burn the cookies. She always says

10. Which structure is not used for reported orders?



Reported Speech Quiz

In this reported speech quiz you can practise turning direct speech into indirect speech online.

Remember that to turn direct speech into reported speech you need to backshift the tenses. For example, the present simple becomes the past simple, and the past simple becomes the past perfect. Pronouns can also change.

It can be difficult if you are new to it, so if you are unsure how to do it, check out the reported speech tense conversion rules before taking the quiz.
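The backshifting rule can be sketched as a toy lookup table. This is an illustrative sketch only, not a real converter: a proper one would also handle pronouns, time expressions like "last week", and multi-word tenses.

```python
# Toy backshifting table: a few direct-speech verb forms mapped to their
# backshifted reported-speech forms. Pronoun and time-expression changes
# are deliberately ignored in this sketch.
BACKSHIFT = {
    "am": "was", "is": "was", "are": "were",
    "will": "would", "can": "could", "must": "had to",
    "want": "wanted", "like": "liked", "feel": "felt",
}

def backshift(sentence):
    """Replace each word that has an entry in the backshift table."""
    return " ".join(BACKSHIFT.get(word, word) for word in sentence.split())

print(backshift("I am tired"))   # -> I was tired
print(backshift("I will help"))  # -> I would help
```

Running the sentences below through a table like this gives the backshifted verb; you then add the reporting clause ("Tina said that ...") and adjust the pronouns yourself.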

  • John said, "I want to see a film".
  • Tina said, "I am tired".
  • He said, "Tom hit me very hard".
  • I said, "I feel happy".
  • She said, "We are learning English".
  • Sandra said, "I liked him a lot".
  • He said, "We all eat meat".
  • Max said, "I will help".
  • Gene said, "I must leave early".
  • She said, "I had tried everything".

More on Reported Speech:

Reported Speech Tenses Chart: How to convert tenses

Reported speech tenses may differ from the tense of the direct speech. The general rule is that tenses in reported speech shift back to the past. This is called backshifting.

Direct and Indirect Speech: The differences explained

Direct and indirect speech differ in that direct speech gives the exact words that were spoken, while indirect (reported) speech reports what was said, usually using the past tense.

Examples of Direct and Indirect Speech

In these examples of direct and indirect speech you are given a sentence in direct speech which is then converted to indirect speech.

Reported Speech Imperatives: Reporting commands in indirect speech

Reported speech imperatives, also known as reported commands, follow a slightly different structure from normal indirect speech. We use imperatives to give orders or advice, or to make requests.



Reported Speech

Perfect English Grammar


Reported Statements

Here's how it works:

We use a 'reporting verb' like 'say' or 'tell'. (Click here for more about using 'say' and 'tell'.) If this verb is in the present tense, it's easy. We just put 'she says' and then the sentence:

  • Direct speech: I like ice cream.
  • Reported speech: She says (that) she likes ice cream.

We don't need to change the tense, though probably we do need to change the 'person' from 'I' to 'she', for example. We also may need to change words like 'my' and 'your'. (As I'm sure you know, often, we can choose if we want to use 'that' or not in English. I've put it in brackets () to show that it's optional. It's exactly the same if you use 'that' or if you don't use 'that'.)

But , if the reporting verb is in the past tense, then usually we change the tenses in the reported speech:

  • Reported speech: She said (that) she liked ice cream.

However, if what was said is still true (a general truth), the tense doesn't have to change:

  • Direct speech: The sky is blue.
  • Reported speech: She said (that) the sky is/was blue.

Click here for a mixed-tense exercise to practise reported statements. Click here for a list of all the reported speech exercises.

Reported Questions

So now you have no problem with making reported speech from positive and negative sentences. But how about questions?

  • Direct speech: Where do you live?
  • Reported speech: She asked me where I lived.
  • Direct speech: Where is Julie?
  • Reported speech: She asked me where Julie was.
  • Direct speech: Do you like chocolate?
  • Reported speech: She asked me if I liked chocolate.

Click here to practise reported 'wh' questions. Click here to practise reported 'yes / no' questions.

Reported Requests

There's more! What if someone asks you to do something (in a polite way)? For example:

  • Direct speech: Close the window, please.
  • Or: Could you close the window please?
  • Or: Would you mind closing the window please?
  • Reported speech: She asked me to close the window.
  • Direct speech: Please don't be late.
  • Reported speech: She asked us not to be late.

Reported Orders

  • Direct speech: Sit down!
  • Reported speech: She told me to sit down.
  • Click here for an exercise to practise reported requests and orders.
  • Click here for an exercise about using 'say' and 'tell'.
  • Click here for a list of all the reported speech exercises.

Seonaid Beckwith

Hello! I'm Seonaid! I'm here to help you understand grammar and speak correct, fluent English.


Original Research Article

More Than Words: Extra-Sylvian Neuroanatomic Networks Support Indirect Speech Act Comprehension and Discourse in Behavioral Variant Frontotemporal Dementia


  • 1 Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
  • 2 Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
  • 3 Penn Image Computing and Science Laboratory, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
  • 4 Center for Neurodegenerative Disease Research, Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States

Indirect speech acts—responding “I forgot to wear my watch today” to someone who asked for the time—are ubiquitous in daily conversation, but are understudied in current neurobiological models of language. To comprehend an indirect speech act like this one, listeners must not only decode the lexical-semantic content of the utterance, but also make a pragmatic, bridging inference. This inference allows listeners to derive the speaker’s true, intended meaning—in the above dialog, for example, that the speaker cannot provide the time. In the present work, we address this major gap by asking non-aphasic patients with behavioral variant frontotemporal dementia (bvFTD, n = 21) and brain-damaged controls with amnestic mild cognitive impairment (MCI, n = 17) to judge simple question-answer dialogs of the form: “Do you want some cake for dessert?” “I’m on a very strict diet right now,” and relate the results to structural and diffusion MRI. Accuracy and reaction time results demonstrate that subjects with bvFTD, but not MCI, are selectively impaired in indirect relative to direct speech act comprehension, due in part to their social and executive limitations, and performance is related to caregivers’ judgment of communication efficacy. MRI imaging associates the observed impairment in bvFTD to cortical thinning not only in traditional language-associated regions, but also in fronto-parietal regions implicated in social and executive cerebral networks. Finally, diffusion tensor imaging analyses implicate white matter tracts in both dorsal and ventral projection streams, including superior longitudinal fasciculus, frontal aslant, and uncinate fasciculus. These results have strong implications for updated neurobiological models of language, and emphasize a core, language-mediated social disorder in patients with bvFTD.

Introduction

“The chief end of language in communication is to be understood, and words don’t serve well for that end—whether in everyday or in philosophical discourse—when some word fails to arouse in the hearer the idea it stands for in the mind of the speaker.” –John Locke (1689), “An Essay Concerning Human Understanding”

To paraphrase the famed English philosopher John Locke, human communication does not depend only on decoding the individual meanings of words per se , but rather decoding the speaker’s idea represented by those words. Indeed, we do not communicate by volleying single words back and forth in isolation: we communicate through stories, narratives, and conversations ( Bell, 2002 ; Kellas, 2005 ). This is a critical point that bears significant implications for the experimental methodology adopted by neuroscientists and the theoretical frameworks they endorse in studying language. From this perspective, the methodology we have used to date—studying the neural basis of phonology, morphology, syntax, and semantics—may be too narrowly focused, as these elements alone are often insufficient for comprehension. Instead, when we consider language in an interactive real-world context—as language for communication —we recognize that language is polysemous and consequently, listeners must make pragmatic, bridging inferences in order to derive a speaker’s true meaning. In the present study, we address this major gap in traditional neurobiological models of language by focusing on the highly common but often overlooked inferential component of conversational speech.

Indirect speech acts, which are ubiquitous in daily communication, are a canonical example of natural, inferential language. Consider, for instance, if Sally asks Betty, “Do you want some cake for dessert?” and Betty sadly replies, “I’m on a very strict diet right now.” In the given exchange, Sally can easily infer that Betty is declining the cake, even though it is not explicitly stated in her reply. Although indirect speech epitomizes the resource-demanding, socially constrained nature of language, its processing appears to be both quick and effortless ( Clark, 1979 ). Still unknown, however, are the brain correlates of this remarkable feat: what are the cognitive and neural substrates of indirect speech act comprehension?

Historical investigations into the neurobiology of language have typically been limited to studies of speech sounds, words, and sentences. Pioneered by the physicians Paul Broca and Carl Wernicke, the resulting “Wernicke-Lichtheim-Geschwind” (WLG) model emphasizes two primary hubs in left hemisphere peri-Sylvian cortex: the inferior frontal gyrus, specific for language production, and the posterior superior temporal gyrus, specific for language comprehension. While we have now developed a more nuanced understanding of the contributions of these brain regions in supporting language, the WLG model cannot fully account for the complexities of real-world language and communication—how we integrate utterances with prior context so effortlessly, make inferences about speaker meaning, and engage in the rapid back and forth of conversation ( Tremblay and Dick, 2016 ; Hasson et al., 2018 ).

More recently, we have begun to study natural language discourse—that is, the social use of language, or language for communication. Discourse typically has a supra-sentential structure, and consequently, may require additional neurocognitive resources to disambiguate meaning. Despite its ubiquity in daily language, however, scant attention has been paid to indirect speech acts like the one above—communicative exchanges in which the intended speaker meaning is not directly coded in the lexical-semantic content of the utterance itself ( Grice, 1975 ; Searle, 1975 ). To address this major gap in natural language use, we study indirect replies, a subtype of indirect speech that boasts several theoretical advantages over previous language domains used to study discourse: (1) they are relatively short and can be tightly controlled, unlike lengthy narratives; (2) their meaning does not become “frozen” due to repeated usage, as with metaphors, idioms, or proverbs; (3) they do not have an affective component, which typically characterizes irony and sarcasm; and (4) they involve an interactive exchange between speakers, which reflects how language is most commonly used. With these factors in mind, we developed a novel, question-answer paradigm manipulating inferential demand—whether a reply is conveyed directly or indirectly.

Previous studies investigated indirect speech comprehension with fMRI in healthy adults ( Shibata et al., 2011 ; Basnáková et al., 2013 ; Jang et al., 2013 ; Feng et al., 2017 ). Here, we use a patient lesion-model to examine the neurobiological basis of indirect speech. Specifically, we study patients with behavioral variant frontotemporal dementia (bvFTD), who constitute an ideal cohort to study deficits in “real world” communication ( Grossman, 2018 ). A young-onset neurodegenerative disease, bvFTD is characterized by changes in social comportment, personality, and executive function due to disease in frontal and temporal cortices. Importantly, while patients are grossly non-aphasic, they may show deficits at the discourse level of language. Previous research thus has demonstrated that bvFTD speech is marked by poor narrative organization and limited appreciation of global meaning, abnormal prosody, simplified grammatical structures, and a reliance on concrete concepts and literal meaning ( Ash et al., 2006 ; Farag et al., 2010 ; Charles et al., 2014 ; Cousins et al., 2017 ; Nevler et al., 2017 ). Moreover, to demonstrate that limited indirect speech comprehension cannot be easily attributed to the presence of any neurodegenerative disease, we examined non-aphasic patients with amnestic mild cognitive impairment (MCI). Patients with MCI show some cognitive decline but remain largely capable of independent day-to-day functioning ( Gauthier et al., 2010 ), and thus represent an appropriate brain-damaged control group to examine the specificity of an effect observed in bvFTD.

Based on previous work from our laboratory and others, we predict that non-aphasic bvFTD patients will show deficits in indirect speech related in part to disease in brain regions associated with an “extended language network” encompassing social, executive, and language regions ( Ferstl et al., 2008 ). We hypothesize further that critical white matter tracts linking these linguistic and extra-linguistic regions may also be disrupted in bvFTD, and interruption of white matter-mediated connectivity within the extended language network also may contribute to a limitation in indirect speech comprehension. While the initial WLG model posited only a single white matter tract for language—the arcuate fasciculus, connecting Broca’s and Wernicke’s areas—more recent work has begun to implicate multiple tracts, including the superior longitudinal fasciculus, inferior longitudinal fasciculus, and uncinate fasciculus ( Saur et al., 2008 ; Friederici, 2015 ; Vassal et al., 2016 ). It is these tracts that would permit the traditional language network to interact with extra-Sylvian regions – namely, the executive control and social brain networks that are hypothesized to play a role in discourse processing. Accordingly, and given that bvFTD is known to show significant WM disease ( Agosta et al., 2012 ), we adopt a multimodal approach and use a combination of high-resolution structural magnetic resonance imaging (sMRI) and diffusion tensor imaging (DTI) to expand our understanding of the neuroanatomic changes associated with real-world communication.

Materials and Methods

Participants.

Participants included 21 patients with bvFTD, 17 age and education-matched healthy controls, and 17 brain-damaged controls with amnestic MCI. See Table 1 for a summary of demographic and clinical characteristics. All patients (bvFTD, MCI) were diagnosed by board-certified neurologists (M.G. and D.J.I.) using published criteria and a consensus procedure ( Albert et al., 2011 ; Rascovsky et al., 2011 ). As some bvFTD patients may develop language deficits associated with semantic variant primary progressive aphasia (svPPA), any patients with symptomatic evidence of svPPA or a score greater or equal to 1 on the Language Supplement of the Clinical Dementia Rating Scale (CDR) ( Knopman et al., 2011 ) were excluded from the sample population. We note here that we chose MCI as our brain-damaged control group rather than svPPA since we wanted all patients to be non-aphasic and capable of performing the discourse task at a reasonable level of proficiency and without obvious language-related deficits. Alternative causes of cognitive difficulty (e.g., vascular dementia, hydrocephalus, stroke, head trauma, primary psychiatric disorders) were excluded by clinical exam, neuroimaging, CSF, and blood tests. As summarized in Table 1 , severity of overall cognitive impairment was assessed in patients using the Mini-Mental State Examination (MMSE). On average, patient scores fell in the mild range. Healthy control subjects were verified through negative self-report of a neurological and psychiatric history and a score of greater than or equal to 28 on MMSE. All subjects were recruited from the Penn Frontotemporal Degeneration Center and gave informed consent according to a protocol approved by the Institutional Review Board at the University of Pennsylvania.


Table 1. Mean (±SD) of group demographic characteristics 1 .

Experimental Design and Statistical Analyses

The stimulus materials consisted of 120 question-answer dialogs (60 experimental items and 60 filler items of similar structure), summarized in Table 2. They were of the form: “Do you want some cake for dessert?” “I’m on a very strict diet right now.” All questions were polar, such that the expected answer was either “yes” or “no.” Stimuli were presented as printed text in order to avoid any confounds introduced by prosodic cues inherent in the speech stream or limited working memory.


Table 2. Sample Stimulus Materials.

Each question ( n = 30) was associated with two different replies, which systematically varied according to inferential demand (direct, indirect). The 60 filler items used the same questions, but presented both the direct and indirect replies in succession (30 provided the direct reply first, and 30 provided the indirect reply first). The filler items will not be discussed further here. Table 2 illustrates each condition and sample stimuli. Note that indirect replies, as operationalized here, are equivalent to Grice’s notion of “conversational implicatures” ( Grice, 1975 ).

Stimuli were carefully constructed to minimize linguistic variation within and across conditions. The direct and indirect items were matched within each item for number of syllables, mean word frequency ( Brysbaert and New, 2009 ), and mean concreteness ( Brysbaert et al., 2014 ). For word frequency and concreteness, a mean score was generated for each sentence by averaging across the individual scores of each content word. This careful matching procedure is meant to ensure that any differences in processing direct and indirect items are due to the manipulation of inferential demand, and not to any differences in other linguistic properties.
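The per-sentence averaging described above is straightforward to reproduce. In the sketch below the norm values are invented for illustration; the study used the published Brysbaert norms, which are not reproduced here.

```python
# Hypothetical per-word concreteness scores (invented values; the study
# used the norms of Brysbaert et al., 2014).
CONCRETENESS = {"cake": 4.9, "dessert": 4.6, "strict": 2.1, "diet": 3.5}

def sentence_mean(content_words, norms):
    """Mean score across a sentence's content words, skipping any word
    without a norm entry."""
    scores = [norms[w] for w in content_words if w in norms]
    return sum(scores) / len(scores)

# Compare the direct and indirect replies of one (hypothetical) item.
direct_mean = sentence_mean(["cake", "dessert"], CONCRETENESS)   # 4.75
indirect_mean = sentence_mean(["strict", "diet"], CONCRETENESS)  # 2.8
```

Matching then amounts to verifying that these sentence-level means do not differ systematically between the direct and indirect conditions.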

Stimulus presentation, timing, and responses were controlled via E-Prime presentation software. On each trial, a fixation cross was presented (3 s), followed by the question (3 s), and then reply (3 s). The question remained on the screen as the reply appeared, in order to reduce any working memory demands. Following each trial, subjects were presented a probe: “Does the reply mean yes or no?” and given 10 s to respond via button press. Response accuracy and response time were recorded for each condition. Items were counterbalanced so that half the replies had a positive connotation (i.e., mean “yes”) and half the replies had a negative connotation (i.e., mean “no”). Participants were trained prior to testing and completed 12 practice trials. In total, task administration took approximately one hour.

We assessed performance using two independent metrics, response accuracy and reaction time, as well as two derivative measures: an impairment score and a slowing score. The impairment score, which was meant to quantify a patient’s degree of impairment in indirect speech processing specifically, was calculated by subtracting accuracy in the direct condition from accuracy in the indirect condition within each individual subject (impairment score per subject = indirect accuracy – direct accuracy). The slowing score is an analogous measure for reaction time (slowing score = indirect reaction time – direct reaction time). All analyses used non-parametric statistics, as the data were not normally distributed according to Shapiro–Wilk tests. Between-group comparisons were performed with Mann-Whitney tests, and within-group comparisons with Wilcoxon tests. Correlations were calculated using the Spearman method. All statistical analyses were performed in R 1 .
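The paper's statistics were run in R; the derived scores and the non-parametric tests can be sketched in Python with SciPy, using invented accuracy data (the real data are not reproduced here).

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject proportions correct in each condition.
direct_acc   = np.array([0.95, 0.92, 0.90, 0.97, 0.93])
indirect_acc = np.array([0.70, 0.85, 0.60, 0.90, 0.75])

# Impairment score = indirect accuracy - direct accuracy (per subject);
# negative values indicate selective difficulty with indirect replies.
impairment = indirect_acc - direct_acc

# Within-group comparison (paired, non-parametric): Wilcoxon signed-rank.
w_stat, w_p = stats.wilcoxon(indirect_acc, direct_acc)

# Between-group comparison (unpaired): Mann-Whitney U, here against a
# second hypothetical group's impairment scores.
control_impairment = np.array([-0.02, 0.00, -0.01, 0.01, -0.03])
u_stat, u_p = stats.mannwhitneyu(impairment, control_impairment)

# Correlation with an external measure (e.g., a caregiver rating):
# Spearman's rho.
caregiver_rating = np.array([2.0, 3.5, 1.0, 4.0, 2.5])
rho, rho_p = stats.spearmanr(impairment, caregiver_rating)
```

The slowing score is computed identically, with per-condition reaction times in place of accuracies.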

Prior to data collection, stimulus validity was confirmed via pre-testing. In a norming study, healthy, young adult subjects ( n = 10) were asked to read each dialog and respond to a series of questions via button press. As in the main experiment, subjects were first asked to indicate if the reply meant “yes” or “no”. Next, subjects were asked to rate how direct the reply sounded and how natural the dialog sounded, both on a scale of 1 to 5 (where 1 = very direct/natural and 5 = very indirect/unnatural). Overall, subjects performed at ceiling, with a mean (±S.D.) accuracy of 97.87% (±0.05) across all items. Furthermore, there was no significant difference for accuracy [direct = 96.88 (±0.02), indirect = 98.63 (±0.01); p = 0.07] or naturalness [direct = 1.45 (±0.64), indirect = 1.87 (±0.51); p = 0.12] between the direct and indirect conditions. Importantly, there was a significant difference between stimuli in terms of the directness rating [direct = 1.19 (±0.13), indirect = 3.50 (±0.98); p = 0.00003].

Neuropsychological Battery

In order to assess the potential contribution of linguistic and non-linguistic cognitive processes to speech act comprehension, both bvFTD and MCI patient groups were also administered a comprehensive neuropsychological battery. Language was assessed with 3 measures, each representing a different level of language processing. Phonological awareness was assessed with the Repetition score from the Philadelphia Brief Assessment of Cognition (PBAC) ( Libon et al., 2011 ), and semantic knowledge with the Multi-Lingual Naming Test (MINT) ( Gollan et al., 2012 ). Finally, grammatical comprehension was assessed using a two-alternative forced-choice sentence-picture matching task, which yields a ratio score comparing comprehension of object-relative sentences to subject-relative sentences ( Charles et al., 2014 ).

Next, executive function was assessed with backward digit span (BDS) ( Wechsler, 1997 ), a test of working memory which requires subjects to repeat an orally presented sequence of numbers in reverse order, and Trailmaking Test B (TMT) ( Reitan, 1958 ), a test of mental flexibility in which subjects must connect a series of dots in ascending order, alternating between letters (A-K) and numbers (1-12). The time to complete Trailmaking Test B (in seconds) was normalized to each subject's time to complete Trailmaking Test A (in which only numbers are presented and there is no switching involved), in order to control for any potential motor differences across subjects.

Social cognition was assessed with the Social Norms Questionnaire (SNQ), a 22-item questionnaire probing social knowledge and an individual’s ability to use context to decide when a behavior is or is not socially appropriate ( Panchal et al., 2015 ). A higher score on the SNQ indicates greater knowledge of social norms. Scoring of the SNQ also yields two subscores: an “Overadhere” score, which refers to the endorsement of socially appropriate behavior as inappropriate (e.g., wearing the same shirt twice in 2 weeks), and a “Break” score, which refers to endorsement of a socially inappropriate behavior as appropriate (e.g., hugging a stranger without asking first). A caregiver informant also completed the Perception of Conversation Index (PCI). Section 1 of the questionnaire assesses caregiver perception of conversational difficulties in patients and includes questions such as “Has difficulty with telephone conversations,” and “Mixes-up the details while telling a story” ( Orange et al., 2009 ; Savundranayagam and Orange, 2011 ).

We also collected measures in two unrelated cognitive domains to serve as negative controls: visuospatial functioning and episodic memory. Both of these abilities are typically relatively preserved in bvFTD. To assess visuospatial functioning, we used the “copy” measure of the Rey-Osterrieth Complex Figure Test ( Libon et al., 2011 ), in which a subject must copy a complicated geometric line drawing freehand, and Judgment of Line Orientation (JOLO), in which subjects match an angled line to one of 11 lines arranged in a semicircle ( Benton et al., 1983 ). Finally, to assess episodic memory, we used the “recall” measure of the Rey-Osterrieth Complex Figure Test ( Libon et al., 2011 ), where a subject must draw the same complicated line drawing from memory, after a delay. We also assessed episodic memory with the Philadelphia Verbal Learning Test ( Libon et al., 1996 ), which is a 9-item list-learning task modeled after the California Verbal Learning Test. The number of correct items recalled on Trial 7 was used as the dependent variable here.

Structural Imaging: Methods and Analysis

Image acquisition.

High-resolution volumetric T1-weighted structural magnetic resonance imaging (MRI) was collected for 19 bvFTD patients and an independent cohort of 25 healthy age- and education-matched controls from the surrounding community (mean age = 67.23 (±7.46), p = 0.37; mean education = 15.88 (±2.19) p = 0.22). These controls were used to define an average template brain of comparable age that can be used to identify regions of significant gray matter disease in patients, on a voxel by voxel basis. A T1 image was not available for two patients with bvFTD due to contraindications and safety concerns, including claustrophobia and metal in the body (i.e., pacemaker). MRI volumes were acquired using a magnetization prepared rapid acquisition with gradient echo (MPRAGE) sequence from a SIEMENS 3.0T Tim Trio scanner using an axially acquired protocol with the following acquisition parameters: repetition time (TR) = 1620 ms; echo time (TE) = 3.87 ms; slice thickness = 1.0 mm; flip angle = 15°; matrix = 192 × 256, 160 slices, and in-plane resolution = 0.9766 × 0.9766 mm 2 . Whole-brain MRI volumes were preprocessed using Advanced Normalization Tools 2 using the state-of-the-art antsCorticalThickness pipeline described previously ( Avants et al., 2008 ; Klein et al., 2010 ; Tustison et al., 2014 ). Briefly, processing begins by deforming each individual dataset into a standard local template space that uses a canonical stereotactic coordinate system, generated using a subset of images from the open access series of imaging studies dataset (OASIS) ( Marcus et al., 2010 ). ANTs then applies a highly accurate registration algorithm using symmetric and topology-preserving diffeomorphic deformations, which minimize bias to the reference space while still capturing the deformation necessary to aggregate images in common space. 
The ANTs Atropos tool uses template-based priors to segment images into six tissue classes (cortex, white matter, CSF, subcortical gray structures, brainstem, and cerebellum) and generate corresponding probability maps. Voxelwise cortical thickness is finally measured in millimeters (mm). Resulting images are warped into Montreal Neurological Institute (MNI) space, smoothed using a 2 sigma smoothing kernel, and downsampled to 2mm isotropic voxels.

Voxel-Wise Analyses

To define areas of significant cortical thinning in bvFTD, non-parametric, permutation-based imaging analyses were performed with threshold-free cluster enhancement ( Smith and Nichols, 2009 ) and the randomize tool in FSL 3 . Briefly, permutation-based t -tests evaluate a true assignment of cortical thickness values across groups (signal) relative to many (e.g., 10,000) random assignments (noise). Accordingly, permutation-based statistical testing is robust to concerns regarding multiple comparisons and preferred over traditional methods using parametric-based t-tests as permutation testing effectively controls for false positives ( Winkler et al., 2014 ). Cortical thickness was compared in patients relative to the independent cohort of 25 healthy controls described above and restricted to an explicit mask of high probability cortex (> 0.4). We report clusters that survived a statistical threshold of p < 0.01, correcting for multiple comparisons using the family wise error (FWE) rate relative to 10,000 random permutations. Results were projected onto the Conte69 surface-based atlas using Connectome Workbench 4 .
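Stripped of threshold-free cluster enhancement and the imaging machinery, the permutation logic behind the group comparison can be sketched for a single voxel with invented thickness values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cortical-thickness values (mm) for one voxel; invented for
# illustration, not taken from the study.
patients = np.array([2.1, 2.0, 1.9, 2.2, 1.8, 2.0])
controls = np.array([2.5, 2.4, 2.6, 2.3, 2.5, 2.4])

# Observed group difference (the "true assignment" of labels).
observed = controls.mean() - patients.mean()

# Null distribution: many random reassignments of group labels.
pooled = np.concatenate([patients, controls])
n_perm = 10_000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)
    null[i] = perm[len(patients):].mean() - perm[:len(patients)].mean()

# p-value: fraction of random assignments at least as extreme as the
# observed difference (with the +1 correction for the observed labeling).
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
```

Because the p-value is computed directly from the empirical null distribution, no parametric assumptions about the thickness values are needed, which is why permutation testing controls false positives well in imaging data.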

To relate behavioral performance to regions of significant cortical thinning, we fit linear regression models with the randomize tool of FSL and the impairment score as a covariate. Permutations were run exhaustively up to a maximum of 10,000 for each analysis. To constrain our interpretation to areas of known disease, we restricted our regression analyses to an explicit mask containing voxels of significant cortical thinning, as defined by the group comparison described above. Results outside these regions of known disease would be difficult to interpret since they could be attributed to a variety of individual differences unrelated to disease per se (e.g., healthy aging, genetic variation, etc.). For the regression analyses, we report clusters with a minimum of 20 adjacent voxels and surviving a height threshold of p < 0.005, which is recommended for optimal balance of Type I and Type II error rates ( Lieberman and Cunningham, 2009 ). Results were projected onto slices using MRIcron software ( Rorden and Brett, 2000 ).

ROI Analyses

Next, we conducted a series of whole-brain region-of-interest (ROI) analyses to test our hypothesis that indirect reply comprehension involves the interaction of multiple brain networks: the core language network, the theory-of-mind/social network, and the multiple-demand/executive network. Using publicly available software, we extracted mean cortical thickness values for each of the three networks for each subject. Each network ROI (see below for network ROI definitions) was warped from MNI space to the subject's native T1 space prior to extracting the estimates of cortical thickness (mm). To demonstrate the specificity of our predicted relationship, we similarly extracted cortical thickness estimates from the sensorimotor network to use as a negative control.
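The per-network extraction step amounts to averaging thickness within a binary mask that has already been warped to the subject's space. A minimal sketch, with hypothetical arrays standing in for the subject's thickness image and one network ROI:

```python
import numpy as np

def network_mean_thickness(thickness, network_mask):
    """Mean cortical thickness (mm) within one network ROI.

    thickness    : voxelwise thickness image in the subject's native space
    network_mask : binary ROI already warped from MNI to native space
    """
    return float(thickness[network_mask > 0].mean())

# Toy illustration with a synthetic 3-D image and mask.
thickness = np.full((4, 4, 4), 2.5)     # uniform 2.5 mm thickness
mask = np.zeros((4, 4, 4), int)
mask[:2] = 1                            # ROI covers half the volume
```

In practice the image and mask would be loaded from NIfTI files after registration; the averaging itself is this simple masked mean, repeated once per network (language, social, executive, and the sensorimotor control).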

The language network ROI was constructed by summing 4 language ROIs identified by Fedorenko et al. (2010, 2013), who used a language localizer contrasting reading sentences to reading lists of unconnected but pronounceable words. The final ROIs, which included left IFG, IFG (pars orbitalis), anterior temporal lobe, and posterior temporal lobe, were created from a probabilistic overlap map from 220 healthy participants.

The social network was the sum of 7 ROIs, which were originally constructed by Dufour et al. (2013) and included: dorsomedial prefrontal cortex (dmPFC), middle medial prefrontal cortex (mmPFC), ventromedial prefrontal cortex (vmPFC), precuneus (PC), right superior temporal sulcus (RSTS), right temporoparietal junction (RTPJ), and left temporoparietal junction (LTPJ). The ROIs were developed by contrasting the false belief and false photograph conditions of a standard story-based theory-of-mind task across 462 healthy participants.

The executive ROIs were also adopted from Fedorenko et al. (2013), who contrasted hard and easy versions of a spatial working memory task in 197 healthy participants. For our purposes, we summed only those ROIs overlapping the so-called “fronto-parietal attention network.” The ROIs we selected thus included dorsolateral prefrontal cortex, orbital middle frontal gyrus, bilateral superior parietal lobe, inferior parietal sulcus, and inferior parietal lobule.

Finally, our control sensorimotor network was taken from Shirer et al. (2012), who defined ninety functional ROIs across 14 large-scale resting state brain networks using a classifier with leave-one-out cross-validation.

Diffusion Tensor Imaging: Procedure and Analysis

White matter tracts play a critical role in network activity by transmitting electrical signals across spatially separate gray matter regions, both within and across hemispheres. Therefore, even when gray matter regions are intact, synchronized network activity can be disrupted if there is damage to the white matter projections connecting gray matter nodes. Given this possibility, we used diffusion tensor imaging to examine patterns of structural connectivity in bvFTD and to build a large-scale, multimodal network underlying speech act comprehension.

Diffusion weighted imaging (DWI) was available for the same 19 bvFTD patients with T1 imaging. A 30-directional DWI sequence was collected using single-shot, spin-echo, diffusion-weighted echo planar imaging (FOV = 240 mm; matrix size = 128 × 128; number of slices = 70; voxel size = 1.875 × 1.875 × 2.2 mm³; TR = 8100 ms; TE = 83 ms; fat saturation). Thirty volumes with diffusion weighting (b = 1000 s/mm²) were collected along 30 non-collinear directions, and either one or five volumes without diffusion weighting (b = 0 s/mm²) were collected per subject. A chi-square test demonstrated that the distribution of subjects with either one or five volumes without diffusion weighting did not differ significantly across our groups (χ² = 5.4412, p = 0.08). We also include a nuisance covariate for sequence in our subsequent analyses.
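The sequence-distribution check is an ordinary Pearson chi-square on a contingency table of counts. A minimal sketch with hypothetical counts (not the study's actual tallies):

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for a contingency table of counts."""
    table = np.asarray(table, float)
    # Expected counts under independence: outer product of margins / total.
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical rows = groups, columns = one vs. five b=0 volumes.
counts = [[10, 9],    # e.g., patients
          [12, 13]]   # e.g., controls
stat = chi_square(counts)
```

The statistic would then be compared against a chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom; in practice one would use an established statistics package for the p-value.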

The diffusion images were processed using ANTs (Tustison et al., 2014) and Camino (Cook et al., 2006). Motion and distortion artifacts were removed using affine co-registration of each diffusion-weighted image to the average of the unweighted (b = 0) images. Diffusion tensors were calculated using a weighted linear least-squares algorithm (Salvador et al., 2005) implemented in Camino. Fractional anisotropy (FA) was computed in each voxel from the DT image, and distortion between the subject's T1 and DT image was corrected by registering the FA image to the T1 image. DTs were then relocated to the local template for statistical analysis by applying the FA-to-T1, T1-to-local template, and local template-to-MNI warps, and tensors were reoriented using the preservation of principal direction algorithm (Alexander et al., 2002). Each participant's FA image was recomputed from the DT image in MNI152 template space and smoothed using a 2-sigma smoothing kernel.

As in the gray matter pipeline, we used the randomise tool in FSL to compare FA in patients relative to the same cohort of healthy, age-matched controls. The two-sample t-test of patients vs. controls was run with 10,000 permutations and restricted to voxels containing WM based on an explicit mask of high-probability WM (minimum FA considered WM = 0.20). We also include a nuisance covariate of no interest for sequence difference (one versus five volumes without diffusion weighting). We report clusters that survived a statistical threshold of p < 0.005 and a minimum cluster extent of 200 voxels. Regression analyses then related patient impairment to reduced FA, using a covariate for the indirect impairment score and a nuisance covariate for sequence. These regressions were restricted to the results of the previous analysis—that is, only voxels showing a significant effect of group. As above, we report clusters surviving a height threshold of p < 0.005 and a minimum of 20 contiguous voxels.

Data Availability

The data for the study are available from the authors to qualified investigators with appropriate Institutional Review Board approval and Material Transfer Agreement.

Analysis of Task Performance

Our first objective was to test the hypothesis that inferential demand (i.e., whether a reply was communicated directly versus indirectly) modulates response accuracy in bvFTD. Results are summarized in Figure 1. We find that healthy control subjects performed at ceiling in both the direct and indirect conditions, with no significant difference between conditions (W = 23.5, p = 0.10). Patients with bvFTD, on the other hand, performed significantly worse in the indirect condition than in the direct condition (W = 132.5, p = 0.0008). bvFTD patients were also significantly impaired relative to healthy controls in the indirect condition (W = 276.5, p = 0.003), but not in the direct condition (U = 237.5, p = 0.07). The null result in the direct condition suggests that segmental language ability, such as the comprehension of single words and sentences, is unlikely to be responsible for the decrement in indirect performance. To confirm the group-level results in bvFTD, we also calculated an "impairment score" by subtracting accuracy in the direct condition from accuracy in the indirect condition within each individual subject (impairment score = indirect accuracy − direct accuracy). Accordingly, more negative scores represent a greater degree of impairment. Results again indicate that bvFTD patients [mean impairment = −0.08 (±0.09)] are significantly more impaired than healthy controls [mean impairment = −0.01 (±0.02); U = 273.00, p = 0.004]. Sixteen of 21 (76.2%) bvFTD patients showed a negative impairment score, suggesting that impaired indirect speech comprehension is not an uncommon finding in bvFTD.
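The impairment score is a simple per-subject difference; a short illustration with hypothetical accuracies (proportion correct per condition):

```python
import numpy as np

# Hypothetical per-subject accuracies, one entry per subject.
direct   = np.array([0.98, 1.00, 0.95, 0.97])
indirect = np.array([0.85, 0.92, 0.95, 0.80])

# Impairment score = indirect - direct; more negative = more impaired.
impairment = indirect - direct

# Share of subjects showing any deficit (a negative score).
prop_negative = (impairment < 0).mean()
```

With these toy values, three of four subjects show a negative score, paralleling the 16-of-21 proportion reported for the patient group.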


Figure 1. Response accuracy: Response accuracy in controls, patients with behavioral variant frontotemporal dementia (bvFTD), and patients with amnestic mild cognitive impairment (MCI) in the experimental (short) conditions. (A) Mean (±SE) accuracy in the direct and indirect conditions. Controls are shown in dark gray (left-most bar), bvFTD patients in medium gray (middle bar), and MCI patients in light gray (right bar). (B) Mean (±SEM) impairment score (indirect – direct) across groups. A more negative impairment score indicates more difficulty with the indirect condition relative to a patient’s individual baseline performance on the direct condition. * indicates significance at p < 0.05, ** indicates significance at p < 0.01, *** indicates significance at p < 0.001.

Consider next the brain-damaged control group used to examine the specificity of the effect observed in bvFTD. Results in MCI show that, unlike bvFTD patients, MCI patients are not significantly impaired relative to healthy controls in either the direct (U = 165.50, p = 0.43) or the indirect condition (U = 170.00, p = 0.36). Similarly, their mean impairment score [−0.006 (±0.06)], calculated within each individual, does not differ from that of healthy controls (U = 171.00, p = 0.35), but does differ from bvFTD (U = 110.00, p = 0.04) (see Figure 1). Because bvFTD and MCI patients are matched in global cognition as assessed by the MMSE (U = 107.5, p = 0.93), it is difficult to attribute the relative impairment observed in bvFTD to overall cognitive decline; it is more likely associated with the deficits characteristic of bvFTD.

The reaction time data offer converging evidence for our claim that patients with bvFTD are selectively impaired in indirect reply comprehension, relative to both healthy controls and patients with MCI. A Kruskal–Wallis test indicates that there are significant differences across our three groups for reaction time in both the direct condition (χ²(2) = 10.97, p = 0.001) and the indirect condition (χ²(2) = 13.75, p = 0.001). Upon further analysis, we find that patients with bvFTD are significantly slower to respond to direct replies than healthy controls (U = 81.5, p = 0.004), but not than MCI patients (U = 155.00, p = 0.5). More importantly, in the indirect condition, bvFTD patients are slower to respond than both groups (controls: U = 67.5, p = 0.001; MCI: U = 108.00, p = 0.038). These data, however, do not address whether bvFTD patients simply have slower, non-specific motor reaction times or are more affected by the increased inferential demand of the indirect condition relative to the two other subject groups. To answer this question, we computed an individualized "slowing score" (slowing score = indirect RT − direct RT), analogous to the impairment score calculated for accuracy. In this case, a positive slowing score means a subject is relatively slower in the indirect condition. We find a significant difference in slowing scores across our three groups (χ² = 9.30, p = 0.001). Post hoc testing indicates that patients with bvFTD have significantly larger slowing scores than healthy controls (U = 81.5, p = 0.005) and MCI patients (U = 99.00, p = 0.019) (see Figure 2). Therefore, the disproportionate slowing for indirect compared to direct stimuli in bvFTD suggests that our observations cannot be easily attributed to simple motor slowing. Moreover, this finding demonstrates that patients with bvFTD do not slow their performance in a strategic effort to improve accuracy.
Taken together, our data confirm that patients with bvFTD struggle to process indirect replies during conversation both quickly and accurately.
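The three-group comparison of slowing scores rests on the Kruskal–Wallis H statistic, which can be computed from rank sums. A minimal sketch with hypothetical slowing scores (no tie correction; real analyses should use an established statistics package):

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic for k groups (assumes no tied values)."""
    data = np.concatenate(groups)
    ranks = data.argsort().argsort() + 1.0   # ranks 1..N
    n = len(data)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]      # ranks belonging to this group
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical slowing scores (indirect RT - direct RT, in ms) per group.
controls = [20, 35, 10, 25]
bvftd    = [550, 620, 480, 700]
mci      = [60, 90, 40, 120]
H = kruskal_h(controls, bvftd, mci)
```

A large H relative to a chi-square distribution with k − 1 degrees of freedom indicates that at least one group's slowing scores differ, which is then followed up with pairwise tests as in the text.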


Figure 2. Response latency: Response latency in controls, patients with behavioral variant frontotemporal dementia (bvFTD), and patients with amnestic mild cognitive impairment (MCI). (A) Mean (±SE) reaction time in the direct and indirect conditions. Controls are shown in dark gray (leftmost bar), bvFTD patients in medium gray (middle bar), and MCI patients in light gray (right bar). (B) Mean (±SE) slowing score across groups. A higher slowing score indicates longer reaction times in the indirect condition relative to a patient’s individual baseline performance in the direct condition. * indicates significance at p < 0.05, ** indicates significance at p < 0.01, *** indicates significance at p < 0.001.

Correlational Analyses With Neuropsychological Measures

Next, to examine the cognitive mechanisms associated with the observed deficits in bvFTD, we administered a broad neuropsychological battery targeting core language skills, executive function, and social cognition that may contribute to inferential comprehension, as well as negative control measures of visuospatial functioning and episodic memory. We used Spearman correlations to relate these independent measures to the indirect-relative-to-direct impairment score.
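Spearman's rho is simply Pearson's correlation computed on ranks; a compact sketch with hypothetical data (no tie handling, which an established statistics package would provide):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    # argsort of argsort converts values to 0-based ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical data: impairment score vs. a neuropsychological measure.
impairment = np.array([-0.20, -0.15, -0.10, -0.05, 0.00])
measure    = np.array([ 12.0,  15.0,  14.0,  18.0, 20.0])
rho = spearman_rho(impairment, measure)
```

Here a positive rho would mean that patients with less negative (milder) impairment scores tend to score higher on the measure, the direction of the associations reported below.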

Our first aim is to demonstrate that the indirect speech act comprehension impairment in bvFTD is largely independent of segmental language ability. Consistent with our earlier finding of intact performance in the direct condition, correlation analyses indicate that language ability at both the phonological (i.e., repetition test) and single-word (i.e., MINT) levels is not related to inferential impairment (see Table 3). A measure of grammatical comprehension, however, comparing comprehension of object-relative sentences with subject-relative sentences, is significantly correlated with inferential impairment (rho = 0.52, p = 0.04).


Table 3. Correlation Results.

Next, although patients with bvFTD are known to have deficits in working memory capacity (Kramer et al., 2003; Libon et al., 2007; Baez et al., 2016), we find no relationship between backward digit span and inferential impairment. Other domains of executive function, however, do demonstrate an effect: poor task-switching ability (as indicated by Trailmaking) is correlated with inferential impairment (Trailmaking: rho = −0.63, p = 0.006), suggesting a role for mental flexibility in the interpretation of indirect replies.

In the social domain, the impairment score is also positively associated with the total score on the SNQ (rho = 0.47, p = 0.04). Upon further examination, we find that most patients performing worse on the SNQ have a higher Overadhere score than Break score [Overadhere: mean = 1.95 (±1.47); Break: mean = 1.05 (±1.35)], suggesting that patients who are more rigid in their application of rules to behavior may be similarly rigid in their interpretation of discourse. Finally, we also confirm the construct validity of our indirect speech task by demonstrating that impairment on the task correlates with real-world conversational difficulties, as assessed by caregivers using the PCI-DAT (rho = 0.49, p = 0.02).

bvFTD performance in the indirect condition is not related to visuospatial or episodic memory functioning. The same correlation analyses are also performed in patients with MCI in order to test the specificity of the results in bvFTD, and no results in MCI are significant. In sum, we conclude that a relative impairment in understanding indirect speech is specific to bvFTD and related to the social and executive deficits that characterize the disease. More specifically, we implicate the ability to adapt behavior to changing rules and/or contexts in the interpretation of indirect speech.

Based on these initial correlation results, we then used multiple linear regression to predict the impairment score from three significant and possibly interacting variables, one from each domain: grammatical comprehension, Trailmaking (B-A), and SNQ. A total of 5 different models were tested: all variables independent (Model 1); all variables interacting (Model 2); and each of the pairwise interactions, with the remaining predictor independent (Models 3–5). Only one model yields a significant regression equation [Impairment Score ∼ Grammatical Comprehension + Trailmaking ∗ SNQ; F(4,13) = 9.346, p = 0.006]. The overall model fit is strong, with R² = 0.84. Both Trailmaking (β = 0.009, p = 0.004) and SNQ (β = 0.067, p = 0.002) are significant independent predictors of the impairment score, along with their interaction (β = −0.005, p = 0.004), while grammatical comprehension is not a significant independent predictor (β = 0.11, p = 0.19). The results of this analysis suggest that social cognition and executive functioning interact with one another and may play a role in the interpretation of indirect speech in bvFTD.
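The structure of the winning model, main effects plus a Trailmaking × SNQ interaction, can be sketched with ordinary least squares. The data below are synthetic and the coefficients illustrative only, not the reported values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 18                                    # toy sample size

# Hypothetical, standardized predictors.
grammar = rng.normal(size=n)
trails  = rng.normal(size=n)
snq     = rng.normal(size=n)

# Simulate an impairment score containing a Trailmaking x SNQ interaction.
y = (0.1 * grammar + 0.4 * trails + 0.5 * snq
     - 0.3 * trails * snq
     + rng.normal(scale=0.1, size=n))

# Design matrix: intercept + Grammar + Trails + SNQ + Trails:SNQ.
X = np.column_stack([np.ones(n), grammar, trails, snq, trails * snq])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ beta
r2 = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The interaction term is just the elementwise product of the two predictors added as an extra column; its fitted coefficient captures how the effect of Trailmaking on the outcome changes with SNQ, which is the interpretation offered for the reported model.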

Neuroimaging Analyses

We also sought to determine the neuroanatomic basis of indirect speech act comprehension. More specifically, we examined regions of gray and white matter disease that may be related to impaired indirect speech act performance in patients with bvFTD. We note here that we focus solely on bvFTD patients in the following analyses because patients with MCI showed no impairment in the indirect condition, which is our experimental condition of interest.

We first contrasted cortical thickness in patients with bvFTD relative to an independent cohort of age-matched healthy controls. As illustrated in Figure 3 and summarized in Table 4, this reveals significantly reduced cortical thickness throughout the frontal and anterior temporal lobes bilaterally in bvFTD, with a peak in orbitofrontal cortex, consistent with the disease diagnosis and previous structural imaging studies (Rascovsky et al., 2011; Möller et al., 2016).


Figure 3. Structural Neuroimaging Results: (A) Surface renderings depicting regions of significant cortical thinning in behavioral variant frontotemporal dementia (bvFTD) patients relative to age-matched healthy controls. Heat map intensity refers to t -statistic values. (B) Axial slices and z -axis coordinates illustrating regions of significant cortical thinning in bvFTD patients relative to age-matched healthy controls (red and blue regions) and regions of significant cortical thinning associated with indirect comprehension impairment in bvFTD (red areas, only).


Table 4. MNI Coordinates for Structural Imaging Results.

Next, to relate patient deficits in indirect speech act comprehension to gray matter disease, we performed a regression analysis using the impairment score (indirect − direct) as a covariate. We find that greater relative impairment in the indirect condition is related to reduced cortical thickness in a largely left-lateralized cortical network spanning frontal, temporal, and parietal regions. Significant clusters are observed within the classic peri-Sylvian language network, including left inferior frontal gyrus and posterior middle to superior temporal gyri, as well as right inferior frontal gyrus (pars opercularis). Additional effects are seen in regions more traditionally associated with social cognition, including medial prefrontal cortex, orbitofrontal cortex, and precuneus, and with executive function, including dorsolateral prefrontal cortex. Although unpredicted, we also see significant associations with premotor cortex, precentral gyrus, and supplementary motor areas, which have previously been implicated as part of the multiple-demand network and are thought to play a role in broad domain-general functions (Fedorenko et al., 2013).

Our next set of analyses tested the hypothesis that three primary networks (language, social, and executive) are related to indirect speech comprehension by computing linear models using the mean cortical thickness of each network as a predictor of the impairment score. Here we used an ROI-based approach across the whole brain, rather than a voxel-wise approach. Using the network ROIs associated with language, social, and executive function defined in the Methods section, we find significant effects for each of the three networks, as shown in Figure 4. The effect is specific to these three networks and is not observed in the negative control, the sensorimotor network.


Figure 4. Network Analyses: (A) Network Key. Surface renderings of the brain showing each of the 4 network ROIs tested for their relationship with indirect speech processing: language network (green), social network (blue), executive network (yellow), and sensorimotor network (red). See text for a description of how each network was defined. (B) Network associations with Indirect comprehension impairment. Graphs plot the relationships between network cortical thickness in behavioral variant frontotemporal dementia (bvFTD) and indirect impairment score for language, executive, social, and sensorimotor networks. Note that the sensorimotor network is included as a negative control network to demonstrate specificity. See bottom right corner of each plot for R 2 -values.

Finally, while the majority of previous work on language comprehension has focused primarily on gray matter contributions to processing, we adopt a more connectionist approach here. Using a voxel-wise approach, we observe significantly reduced FA in bvFTD in the following tracts: the uncinate fasciculus, superior and inferior longitudinal fasciculi, and inferior fronto-occipital fasciculus, all long-range association tracts. We also observed disease in the corpus callosum, as well as in white matter of the middle frontal and temporal gyri. We next examined which of these tracts are associated with the (indirect − direct) impairment score in bvFTD. As illustrated in Figure 5 and summarized in Table 5, we find significant effects for the superior longitudinal fasciculus (typically implicated in language processing), the uncinate fasciculus (typically implicated in social-behavioral functioning), the inferior fronto-occipital fasciculus, and the frontal aslant tract.


Figure 5. White Matter Imaging Results: (A) Axial slices showing regions of significantly reduced fractional anisotropy (FA) in white matter (WM) of behavioral variant frontotemporal dementia (bvFTD) patients relative to age-matched healthy controls (blue), regions of significantly reduced FA related to indirect impairment (red), and ancillary white matter regions (outside of blue regions of disease) also related to indirect impairment (violet). See key in upper right hand corner.


Table 5. MNI Coordinates for Diffusion Tensor Imaging Results.

Discussion

Most listeners are exceedingly adept at decoding a speaker's intended meaning, despite the ambiguity inherent in much conversational speech. An unresolved question in neuroscience is how the brain accomplishes this feat. To address this issue, we studied speech act processing in non-aphasic patients with bvFTD and demonstrated that their comprehension is most impaired when a speaker's intended meaning is communicated indirectly. This impairment is related in part to neuropsychological measures of social and executive functioning and grammatical comprehension. Moreover, the observed impairment in indirect speech act comprehension is related to disease not only in the traditional language network (including left IFG and pMTG/STG), but also in two additional networks: the social brain network (including mPFC, OFC, and precuneus) and the executive network (including DLPFC, premotor cortex, and supplementary motor area), as well as in the long-tract white matter projections that integrate these networks. Therefore, while traditional models of language highlight a left peri-Sylvian network, our observations are consistent with the hypothesis that the highly common but often overlooked inferential component of conversational speech is supported in part by an extended language network that incorporates frontal and parietal association cortices well beyond traditional left peri-Sylvian language regions. We discuss these findings and their implications below.

Inferential Demand Modulates Language Comprehension

Our primary objective was to examine how inferential demand—whether a speaker's message is communicated directly or indirectly—modulates comprehension. Analyses of patient performance based on both accuracy and reaction time metrics indicate a deficit in indirect speech act comprehension in bvFTD. Findings of reduced accuracy extend the social-executive deficit in bvFTD to the language domain. We are unaware of other studies of indirect speech act comprehension in bvFTD, although clinical observations in schizophrenia, autism, and traumatic brain injury suggest that difficulties with indirect speech exist in these populations (Champagne-Lavau and Stip, 2010; Johnson and Turkstra, 2012; Pastor-Cerezuela et al., 2018). Moreover, even for items where comprehension is preserved, we find significantly slowed performance. Previous studies have also reported that reaction time increases with inferential demand (Ferstl and von Cramon, 2002; Kuperberg et al., 2006; Siebörger et al., 2007), and our findings suggest that slowing during a communication task that depends in part on inferencing is even greater in patients with bvFTD. Slowed processing can have considerable effects on real-world communication, as the gap between "turns at talk" is typically on the order of 200–250 ms (Stivers et al., 2009; Levinson, 2016). In our data, bvFTD patients show a slowing effect of ∼600 ms; such a processing lag would clearly impede the rapid exchanges that characterize human conversation.

We further demonstrate that our effects are specific to bvFTD and not observable in brain-damaged controls with MCI. While evidence suggests that pragmatic deficits such as impaired proverb interpretation may exist in MCI (Leyhe et al., 2011; Cardoso et al., 2014), such findings may be a consequence of experimental confounds related to stimulus length or "frozen" meanings, and may reflect impaired episodic memory retrieval rather than impaired inferential processing. More work is needed to investigate this possibility.

Next, we examined the cognitive mechanisms that may contribute to indirect speech comprehension by evaluating the results of a neuropsychological battery. Findings indicate that the observed deficit in bvFTD may be multifactorial in nature, as the indirect speech impairment (indirect – direct) score is related in part to language, social, and executive functioning, but not to episodic memory or visuospatial functioning.

Consider first executive function, which was assessed by Trailmaking. The observed correlation aligns well with previous research showing a relationship between mental flexibility and pragmatic competence (Eslinger et al., 2007; Torralva et al., 2015). For example, Torralva et al. (2015) demonstrated that cognitive theory of mind and the ability to infer a speaker's intention in a faux pas task are related to Trailmaking performance, although this finding is confounded in part by the lengthy nature of the stimulus items. While using a narrative helps establish context, as in most theory-of-mind stimulus materials, it can also increase executive demands and introduce carry-over effects that make it difficult to dissociate inferential processing from task-related components of narrative processing, such as the working memory demands needed to maintain narrative elements and track a character over time. Other results underscore this potential confound by suggesting that working memory capacity may predict inference revision ability (Tompkins et al., 1994; Wright and Newhoff, 2002; Pérez et al., 2014). In our experiment, we suggest that bvFTD patients struggle to infer a speaker's intention, perhaps related to difficulty switching from a literal to a pragmatic interpretation of utterance meaning. Our results are less likely to be confounded by working memory demands because of the brief stimulus items and the availability of written stimuli throughout the procedure. Indeed, although working memory is reduced in bvFTD (Kramer et al., 2003), we find little relationship between indirect speech act comprehension impairment and digit span. Future work using auditory stimuli should further investigate working memory contributions to language comprehension.

We also report a positive association between the indirect speech impairment score and performance on the SNQ—a questionnaire assessing an individual's ability to apply socially dictated rules under different constraints (e.g., a conversation with a stranger versus a friend). One important social norm for conversational exchanges is Grice's "Maxim of Relevance," which states that an individual's contribution to an ongoing exchange should always be pertinent and on-topic. bvFTD patients may fail to appreciate this maxim due to degraded social knowledge and can be observed clinically to respond to questions with tangential replies. Thus, they may judge indirect speech as irrelevant to an ongoing exchange and disregard it, ultimately resulting in impaired comprehension, as observed here. Task demands also depend in part on meta-judgments, that is, the patient's judgment of whether a reply means "yes" or "no." This kind of judgment in a controlled experimental context may differ from judgments in a real-world context, where there is additional contextual support for responding to an indirect request. While indirect responses to some requests may be overlearned (e.g., Question: "Do you have the time?" Response: "Noontime.") relative to a direct response (Question: "Do you have the time?" Response: "Yes."), the clinical impression is that questions eliciting an indirect response tend to yield a direct or literal response from bvFTD patients more often than would ordinarily be expected. Additional work is needed in real-world settings to gauge the extent of the indirect speech act comprehension deficit in bvFTD.

Multiple regression analysis confirms the role that executive function and social cognition play in impairment. The final model (Impairment Score ∼ Grammatical Comprehension + Trailmaking ∗ SNQ) also suggests that social and executive deficits are not independent, but rather interact. This result has implications for an ongoing debate in the bvFTD literature concerning the relationship between social cognition and executive function ( Lough et al., 2001 ; Eslinger et al., 2007 ; Le Bouc et al., 2012 ; Bertoux et al., 2016 ).

Although patients with bvFTD are grossly non-aphasic according to clinician assessments of speech, their indirect impairment is associated in part with a language measure: grammatical comprehension. In this case, grammatical comprehension was assessed by comparing sentence-picture matching for grammatically complex object-relative sentences with subject-relative sentences. We note here that the comprehension of object-relative sentences is known to be more difficult than that of subject-relative sentences in both healthy adults and patients with bvFTD (Charles et al., 2014; Demberg and Sayeed, 2016). This positive association may be related in part to the mental manipulation of linguistic materials that may contribute to the comprehension of both object-relative sentences and indirect speech acts. We also note that the relationship between grammatical comprehension and indirect speech act comprehension impairment is lessened when concomitant deficits in social cognition and executive function are taken into account in our three-factor regression model. Additional work is needed to examine the nature of deficits in grammatical comprehension in bvFTD, and the potential contribution of such a deficit to the comprehension of indirect speech acts in these patients.

While we used carefully constructed and well-matched experimental materials to assess comprehension of indirect relative to direct speech acts, one shortcoming of our study is that performance depends on patients' judgments of speech acts rather than their difficulty engaging in actual speech acts during day-to-day natural discourse. To mitigate this shortcoming, we related patients' indirect speech act performance to caregivers' judgments of communicative efficacy. We find that difficulty with indirect speech act comprehension is related to caregivers' perception of impoverished day-to-day conversational ability. Ultimately, our goal is to help improve daily discourse in bvFTD in order to advance patients' quality of life and minimize health-related risks associated with limitations in conversational exchanges, which frequently involve indirect speech acts. Consider a situation where a caregiver asks a loved one, "Do you need me?", and the patient with bvFTD responds "Yes." Such a direct reply, offered when an indirect response is wanted, does not indicate, for example, that the patient is feeling chest pain as might be experienced during a myocardial infarct. Instead, the caregiver may be obliged to infer the problem from the patient's facial expression of pain. It is exceedingly common for a caregiver to infer a bvFTD patient's needs from information other than the verbal response itself. Additional work is needed to confirm the consequences of impaired indirect speech act comprehension in natural, day-to-day conversation in bvFTD.

An Extra-Sylvian Network for Speech Act Comprehension

Although neurobiological models of language centered on left peri-Sylvian regions have been foundational in studies of human brain functioning, these models remain limited in their external validity and generalizability to real-world contexts ( Hasson et al., 2018 ). Here, we examine cortical thinning and fractional anisotropy in patients with bvFTD, and begin to build a large-scale, multimodal language network that can potentially account for some very common aspects of real-world discourse such as indirect speech act comprehension.

To date, only a limited number of studies have examined the neural basis of indirect reply comprehension ( Shibata et al., 2011 ; Basnáková et al., 2013 ; Jang et al., 2013 ; Feng et al., 2017 ). While these fMRI studies offer preliminary evidence for the role of non-language regions including mPFC, TPJ, and precuneus in discourse processing, there are some caveats to keep in mind. For example, several studies used experimental tasks that involve reading a brief narrative followed by an exchange between speakers. Using narratives to establish context can increase executive demands and introduce carry-over effects that make it difficult to dissociate inferential processing from other task-related components of narrative processing, including tracking a character over time, processing event structure, maintaining narrative elements in working memory, and more. Moreover, fMRI studies may not fully control for all task-related demands. To help manage these potential confounds, we designed a novel question-answer paradigm that manipulates inferential demand while simultaneously minimizing task-related resource demands and controlling for linguistic variation across stimuli.

As our paradigm was language-based, we did observe significant effects in the IFG and the posterior MTG/STG. These areas, initially proposed by the Wernicke-Lichtheim-Geschwind (WLG) model and later confirmed, constitute the primary nodes of the classic language network ( Binder et al., 1997 ; Price, 2000 ). In our study, these peri-Sylvian regions are related to patient impairment in the indirect condition over and above that found in the direct condition. Therefore, our results support a role for left peri-Sylvian regions not only in lexical, semantic, and syntactic processing, but also in the higher-level selection and global integration that may be required in discourse, as suggested previously ( Hagoort, 2005 ). Previous fMRI studies of indirect speech and causal inferencing make similar arguments ( Mason and Just, 2004 ; Eviatar and Just, 2006 ). We also observe an effect in the right IFG, which is consistent with the dynamic spillover hypothesis described by Prat and colleagues ( Prat et al., 2011 ). According to this model, activity in the right hemisphere is more likely to be invoked when (1) readers are less skilled and (2) passages are more difficult. It is important to point out that bilateral IFG also appears to contribute to other aspects of speech such as prosody, and prosody may signal a speech act that is indirect in nature. While this may be less relevant in the present study because stimuli were presented in written form, we showed elsewhere that disease in IFG is related to reduced pitch range in the speech of bvFTD patients ( Nevler et al., 2017 ). Additional work is needed to investigate the relative contributions of inferencing and prosodic processing to the limited comprehension of indirect speech acts during bvFTD patients’ natural conversation.

We now know that language processing also extends beyond peri-Sylvian regions ( Ferstl et al., 2008 ; Fedorenko and Thompson-Schill, 2014 ; Hagoort, 2014 ). We report here that extra-Sylvian regions, including the orbitofrontal, medial prefrontal, dorsolateral prefrontal cortices, as well as precuneus and premotor and supplementary motor regions, are related to indirect speech processing in bvFTD. These are regions encompassed by social and executive networks of the brain. Importantly, these findings are relatively selective, as we find no evidence of other network involvement (e.g., sensorimotor network).

Consider first mPFC and precuneus, which are both related to a social brain network commonly associated with “theory of mind” ( Saxe and Kanwisher, 2003 ; Frith and Frith, 2012 ; Dufour et al., 2013 ; Healey and Grossman, 2018 ). While mPFC is traditionally associated with perspective-taking and the ability to make inferences about conspecifics, recent research also suggests that a ventral portion of mPFC, similar to the cluster observed here, plays a role in scene construction and situational processing ( Lieberman et al., 2019 ). In the case of indirect speech acts, mPFC may help generate a “schema” or “situation model” that guides interpretation of ambiguous stimuli and events. The precuneus may play a similar role. One of the brain’s most globally connected areas, the precuneus is traditionally associated with a diverse set of cognitive functions including visuospatial processing, episodic memory ( Shallice et al., 1994 ), and mental imagery ( Hassabis et al., 2007 ; Johnson et al., 2007 ). Newer work, moreover, has demonstrated that the precuneus also plays a role in self-referential processing and first-person perspective-taking, as well as situation model-building and the retrieval of contextual associations from internal stores ( Lundstrom et al., 2005 ; Cavanna and Trimble, 2006 ; Binder et al., 2009 ; Mashal et al., 2014 ; Herold et al., 2016 ). Taken together, the relationship of mPFC and precuneus to indirect speech impairment suggests that indirect reply comprehension requires listeners to (1) adopt the speaker’s perspective, and (2) integrate contextual information into some kind of mental model. Finally, we also observed an effect in orbitofrontal cortex, which is sometimes included in the social brain network. Like mPFC and precuneus, some studies implicate OFC in theory of mind, in addition to tasks involving reversal learning, set-shifting, and affective decision-making ( Rolls, 2004 ; Sabbagh, 2004 ; Badre and Wagner, 2006 ).

DLPFC, on the other hand, is often considered to be part of a “multiple-demand” network commonly linked to domain-general, executive control processes that may be involved in language and other behaviors ( Novais-Santos et al., 2007 ; Duncan, 2010 ; Fedorenko et al., 2013 ). These regions are often defined by contrasting two task conditions that vary in difficulty (e.g., verbal working memory tasks with 4 digits versus 8 digits), mirroring our indirect-direct contrast. It is important to note that these “harder” tasks may not only require additional computational resources, but could also invoke strategic reasoning processes mediated by DLPFC ( Yoshida et al., 2010 ; Yamagishi et al., 2016 ). Finally, with well-documented roles in working memory and selection ( Petrides, 2005 ; Badre, 2008 ), DLPFC is also implicated in the Memory-Unification-Control model of language ( Hagoort, 2013 ), serving as the “control” component and mediating processes such as turn-taking and the selection of contextually appropriate meanings. The motor-associated regions we observed, including premotor cortex, precentral gyrus, and supplementary motor area, have also been hypothesized to belong to this same network as DLPFC ( Fedorenko et al., 2013 ). While additional work is needed to determine more precisely the cortical constituents underlying the social and executive components of language, our extended neurobiological model of language proposes that a neural network supporting common elements of discourse, such as indirect speech act comprehension, incorporates these brain regions associated with executive and social functioning.

Other materials have been used to study inferential demands in language, but may be associated with confounds that can limit interpretation. For example, recent work using fMRI ( Paunov et al., 2019 ) has demonstrated that story comprehension elicits synchronized network activity not only in traditional language-associated regions, but also in social regions including mPFC, TPJ, and precuneus. Similar results have been reported elsewhere ( Xu et al., 2005 ; Mar, 2011 ; AbdulSabur et al., 2014 ). A third, fronto-parietal network associated with executive control, possibly related to working memory, has also been implicated in story comprehension ( Raposo and Marques, 2013 ; Smirnov et al., 2014 ; Mineroff et al., 2018 ; Aboud et al., 2019 ; Paunov et al., 2019 ). Although these results are promising, narratives are inherently long, which makes them difficult to control experimentally and overly dependent on task-related executive resources.

Another common approach to examining the inferential component of discourse has been the study of non-literal or figurative language, including sarcasm, irony, metaphors, idioms, and proverbs (see Rapp et al., 2012 for a comprehensive review). This body of work also implicates social and executive components in the comprehension of pragmatic language ( Wakusawa et al., 2007 ; Bohrn et al., 2012 ; Uchiyama et al., 2012 ; Iskandar and Baird, 2014 ; Obert et al., 2016 ; Filik et al., 2019 ). While informative, these materials are often subject to confounds related to varying degrees of familiarity and emotional valence, among others ( Nippold and Haq, 1996 ; Schmidt and Seger, 2009 ; Ziv et al., 2011 ; Kaiser et al., 2013 ). To minimize confounds associated with lengthy, contextualizing narratives, differential familiarity, and emotional valence, we opted to study the neural basis of inferential reasoning in discourse using indirect speech acts, and found that their comprehension recruits a core, left peri-Sylvian language network as well as cortical regions often encompassed by executive control and social networks.

White Matter Correlates of Speech Act Comprehension

Recent work has paid increasing attention to white matter connectivity. While the traditional WLG model of language focuses primarily on the arcuate fasciculus (a component of the superior longitudinal fasciculus connecting IFG and STG), newer work has identified pathways that not only interconnect peri-Sylvian regions, but also connect these regions to extra-Sylvian regions ( Catani et al., 2005 ; Dick and Tremblay, 2012 ). Analogous to the visual system, these pathways may be divided into dorsal and ventral streams. One characterization implicates the dorsal stream broadly in auditory-motor integration and the ventral stream in mapping form to meaning ( Hickok and Poeppel, 2004 ; Saur et al., 2008 ). Using voxel-based fractional anisotropy analyses, we find evidence implicating tracts in both dorsal and ventral streams in indirect speech act comprehension. This includes the uncinate and inferior fronto-occipital fasciculi in the ventral stream, and the superior longitudinal fasciculus and frontal aslant in the dorsal stream. The frontal aslant, in particular, is a recently discovered tract implicated in both speech and language (on the left) and executive function (on the right) ( Varriano et al., 2018 ; Dick et al., 2019 ). The frontal aslant, which is thought to project between the IFG and the supplementary motor area, has previously been implicated in verbal fluency deficits in aphasic variants of frontotemporal dementia known as primary progressive aphasia ( Catani et al., 2013 ). The uncinate fasciculus, carrying projections between orbitofrontal cortex and anterior temporal regions, has also gained attention recently as a white matter tract mediating the interaction of social and language networks. For example, damage to the uncinate fasciculus in bvFTD is associated not only with a bvFTD diagnosis, but also with deficits in non-literal language comprehension, including sarcasm and irony ( Agosta et al., 2012 ; Downey et al., 2015 ).
Thus, we propose that cortical components of our extended language network are integrated by white matter projections in both dorsal and ventral projection streams.

Strengths of our study include the novel task design with carefully matched direct and indirect conditions and minimal task-related demands; observation of a significant indirect language impairment in a non-aphasic brain-damaged cohort with selective social and executive deficits; an association of indirect speech performance with deficits on measures of task-switching and social functioning; a relationship between indirect speech act performance and real-world communicative efficacy; and a robust association of these deficits with an anatomic network implicating language, social, and executive networks. Nevertheless, several caveats should be kept in mind when interpreting our results. First, although we tested a relatively large bvFTD cohort and demonstrated specificity with a brain-damaged control group, our cohort remains modest in absolute size, patients are not pathologically confirmed, we cannot specify the exact anatomic extent of a “lesion” in neurodegenerative disease, and generalizability is limited to the mild-moderate disease stage. Second, while we confirmed an indirect speech impairment with reaction time data, we report ceiling effects for accuracy in our control subjects, thereby limiting examination of individual differences associated with aging. To differentiate the functions of nodes within the extended language network, future experimental studies should contrast different types of indirect speech, including indirect requests (which have a motor component) and “face-saving” replies (which have an affective component), and assess indirect speech acts in natural discourse. Finally, because our anatomic observations are correlative, it would be valuable to confirm them in healthy young adults using fMRI, and to establish a causal association in bvFTD using longitudinal studies.

The findings discussed here also have meaningful clinical implications. Communication difficulties can compromise social interactions, and in turn, diminish safety, interpersonal relationships and overall well-being. We find that impaired indirect speech is related to communicative efficacy, which is a crucial element of patient safety and quality of life. Accordingly, language deficits may be a target for intervention in bvFTD. Our data also have implications for “best-practice” communication strategies used by patient caregivers: to optimize successful communication, language should be as direct as possible.

With these caveats in mind, we conclude that patients with bvFTD struggle to make the pragmatic inferences necessary to support comprehension of indirect replies, a very common but understudied example of conversational discourse that depends on inferencing. This is due in part to social-executive deficits and degradation of a multimodal, extra-Sylvian network supporting natural, daily language use. More specifically, our findings emphasize the extension of the brain’s traditional language network beyond left peri-Sylvian regions into additional frontal and parietal regions that may be incorporated into a real-world language network that supports everyday discourse. We conclude by emphasizing the importance of studying language in context—the way in which we use it in everyday life. Indeed, it is only when we study language in this way—as a means of communication—that we can begin to characterize the full extent of its neurobiologic foundations.

Data Availability Statement

Ethics Statement

The studies involving human participants were reviewed and approved by Institutional Review Board, University of Pennsylvania. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

MH designed the experiments, performed the statistical analyses, and wrote the initial draft of the report. EH, MU, and DI collected the data and read and edited the manuscript. CO performed statistical analyses and read and edited the manuscript. NN read and edited the manuscript. MG designed the experiments, read and edited the manuscript, and obtained funding. All authors contributed to the article and approved the submitted version.

Funding

This work was supported in part by grants from NIH (AG066597, AG017586, AG052943, AG054519, and NS101863), the Wyncote Foundation, and an anonymous donor.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank the radiographers at the Hospital of the University of Pennsylvania for their assistance with data collection and our patients and volunteers for their continued participation.

  • ^ https://cran.r-project.org/
  • ^ https://github.com/ANTsX/ANTs
  • ^ http://fsl.fmrib.ox.ac.uk/fsl/fslwiki
  • ^ http://www.humanconnectome.org/software/connectome-workbench.html
  • ^ https://github.com/ftdc-picsl/QuANTs/tree/master/R

AbdulSabur, N. Y., Xu, Y., Liu, S., Chow, H. M., Baxter, M., Carson, J., et al. (2014). Neural correlates and network connectivity underlying narrative production and comprehension: a combined fMRI and PET study. Cortex 57, 107–127. doi: 10.1016/j.cortex.2014.01.017

Aboud, K. S., Bailey, S. K., Del Tufo, S. N., Barquero, L. A., and Cutting, L. E. (2019). Fairy tales versus facts: genre matters to the developing brain. Cereb. Cortex 29, 4877–4888. doi: 10.1093/cercor/bhz025

Agosta, F., Scola, E., Canu, E., Marcone, A., Magnani, G., Sarro, L., et al. (2012). White matter damage in frontotemporal lobar degeneration spectrum. Cereb. Cortex 22, 2705–2714. doi: 10.1093/cercor/bhr288

Albert, M. S., DeKosky, S. T., Dickson, D., Dubois, B., Feldman, H. H., Fox, N. C., et al. (2011). The diagnosis of mild cognitive impairment due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 7, 270–279. doi: 10.1016/j.jalz.2011.03.008

Alexander, D. C., Barker, G. J., and Arridge, S. R. (2002). Detection and modeling of non-Gaussian apparent diffusion coefficient profiles in human brain data. Magn. Reson. Med. 48, 331–340. doi: 10.1002/mrm.10209

Ash, S., Moore, P., Antani, S., McCawley, G., Work, M., and Grossman, M. (2006). Trying to tell a tale: discourse impairments in progressive aphasia and frontotemporal dementia. Neurology 66, 1405–1413. doi: 10.1212/01.wnl.0000210435.72614.38

Avants, B. B., Epstein, C. L., Grossman, M., and Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12, 26–41. doi: 10.1016/j.media.2007.06.004

Badre, D. (2008). Cognitive control, hierarchy, and the rostro-caudal organization of the frontal lobes. Trends Cogn. Sci. 12, 193–200. doi: 10.1016/j.tics.2008.02.004

Badre, D., and Wagner, A. D. (2006). Computational and neurobiological mechanisms underlying cognitive flexibility. Proc. Natl. Acad. Sci. U.S.A. 103, 7186–7191. doi: 10.1073/pnas.0509550103

Baez, S., Pinasco, C., Roca, M., Ferrari, J., Couto, B., García-Cordero, I., et al. (2016). Brain structural correlates of executive and social cognition profiles in behavioral variant frontotemporal dementia and elderly bipolar disorder. Neuropsychologia 126, 159–169. doi: 10.1016/j.neuropsychologia.2017.02.012

Basnáková, J., Weber, K., Petersson, K. M., van Berkum, J., and Hagoort, P. (2013). Beyond the language given: the neural correlates of inferring speaker meaning. Cereb. Cortex 24, 2572–2578. doi: 10.1093/cercor/bht112

Bell, J. S. (2002). Narrative inquiry: more than just telling stories. TESOL Q. 36, 207–213. doi: 10.2307/3588331

Benton, A. L., Sivan, A. B., Hamsher, K. de S., Varney, N. R., and Spreen, O. (1983). Judgment of Line Orientation. Contributions to Neuropsychological Assessment: A Clinical Manual. Oxford: Oxford University Press.

Bertoux, M., O’Callaghan, C., Dubois, B., and Hornberger, M. (2016). In two minds: executive functioning versus theory of mind in behavioural variant frontotemporal dementia. J. Neurol. Neurosurg. Psychiatry 87, 231–234. doi: 10.1136/jnnp-2015-311643

Binder, J. R., Desai, R. H., Graves, W. W., and Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796. doi: 10.1093/cercor/bhp055

Binder, J. R., Frost, J. A., Hammeke, T. A., Cox, R. W., Rao, S. M., and Prieto, T. (1997). Human brain language areas identified by functional magnetic resonance imaging. J. Neurosci. 17, 353–362. doi: 10.1523/jneurosci.17-01-00353.1997

Bohrn, I. C., Altmann, U., and Jacobs, A. M. (2012). Looking at the brains behind figurative language: a quantitative meta-analysis of neuroimaging studies on metaphor, idiom, and irony processing. Neuropsychologia 50, 2669–2683. doi: 10.1016/j.neuropsychologia.2012.07.021

Brysbaert, M., and New, B. (2009). Moving beyond Kucera and Francis: a critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behav. Res. Methods 41, 977–990. doi: 10.3758/BRM.41.4.977

Brysbaert, M., Warriner, A. B., and Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behav. Res. Methods 46, 904–911. doi: 10.3758/s13428-013-0403-5

Cardoso, S., Silva, D., Maroco, J., De Mendonça, A., and Guerreiro, M. (2014). Non-literal language deficits in mild cognitive impairment. Psychogeriatrics 14, 222–228. doi: 10.1111/psyg.12101

Catani, M., Jones, D. K., and Ffytche, D. H. (2005). Perisylvian language networks of the human brain. Ann. Neurol. 57, 8–16. doi: 10.1002/ana.20319

Catani, M., Mesulam, M. M., Jakobsen, E., Malik, F., Martersteck, A., Wieneke, C., et al. (2013). A novel frontal pathway underlies verbal fluency in primary progressive aphasia. Brain 136, 2619–2628. doi: 10.1093/brain/awt163

Cavanna, A. E., and Trimble, M. R. (2006). The precuneus: a review of its functional anatomy and behavioural correlates. Brain 129, 564–583. doi: 10.1093/brain/awl004

Champagne-Lavau, M., and Stip, E. (2010). Pragmatic and executive dysfunction in schizophrenia. J. Neurolinguistics 23, 285–296. doi: 10.1016/j.jneuroling.2009.08.009

Charles, D., Olm, C., Powers, J., Ash, S., Irwin, D. J., McMillan, C. T., et al. (2014). Grammatical comprehension deficits in non-fluent/agrammatic primary progressive aphasia. J. Neurol. Neurosurg. Psychiatry 85, 249–256. doi: 10.1136/jnnp-2013-305749

Clark, H. H. (1979). Responding to indirect speech acts. Cogn. Psychol. 11, 430–477. doi: 10.1016/0010-0285(79)90020-3

Cook, P. A., Bai, Y., Nedjati-Gilani, S., Seunarine, K. K., Hall, M. G., Parker, G. J., et al. (2006). “Camino: Open-Source Diffusion-MRI Reconstruction and Processing,” in Proceedings of the 13th Scientific Meeting, International Society for Magnetic Resonance in Medicine , Seattle, WA, 2759.

Cousins, K. A. Q., Ash, S., Irwin, D. J., and Grossman, M. (2017). Dissociable substrates underlie the production of abstract and concrete nouns. Brain Lang. 165, 45–54. doi: 10.1016/j.bandl.2016.11.003

Demberg, V., and Sayeed, A. (2016). The frequency of rapid pupil dilations as a measure of linguistic processing difficulty. PLoS One 11:e0146194. doi: 10.1371/journal.pone.0146194

Dick, A. S., Garic, D., Graziano, P., and Tremblay, P. (2019). The frontal aslant tract (FAT) and its role in speech, language and executive function. Cortex 111, 148–163. doi: 10.1016/j.cortex.2018.10.015

Dick, A. S., and Tremblay, P. (2012). Beyond the arcuate fasciculus: Consensus and controversy in the connectional anatomy of language. Brain 135, 3529–3550. doi: 10.1093/brain/aws222

Downey, L. E., Mahoney, C. J., Buckley, A. H., Golden, H. L., Henley, S. M., Schmitz, N., et al. (2015). White matter tract signatures of impaired social cognition in frontotemporal lobar degeneration. Neuroimage Clin. 8, 640–651. doi: 10.1016/j.nicl.2015.06.005

Dufour, N., Redcay, E., Young, L., Mavros, P. L., Moran, J. M., Triantafyllou, C., et al. (2013). Similar brain activation during false belief tasks in a large sample of adults with and without autism. PLoS One 8:e75468. doi: 10.1371/journal.pone.0075468

Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends Cogn. Sci. 14, 172–179. doi: 10.1016/j.tics.2010.01.004

Eslinger, P. J., Moore, P., Troiani, V., Antani, S., Cross, K., Kwok, S., et al. (2007). Oops! resolving social dilemmas in frontotemporal dementia. J. Neurol. Neurosurg. Psychiatry 78, 457–460. doi: 10.1136/jnnp.2006.098228

Eviatar, Z., and Just, M. A. (2006). Brain correlates of discourse processing: an fMRI investigation of irony and conventional metaphor comprehension. Neuropsychologia 44, 2348–2359. doi: 10.1016/j.neuropsychologia.2006.05.007

Farag, C., Troiani, V., Bonner, M., Powers, C., Avants, B., Gee, J., et al. (2010). Hierarchical organization of scripts: converging evidence from FMRI and frontotemporal degeneration. Cereb. Cortex 20, 2453–2463. doi: 10.1093/cercor/bhp313

Fedorenko, E., and Thompson-Schill, S. L. (2014). Reworking the language network. Trends Cogn. Sci. 18, 120–126. doi: 10.1016/j.tics.2013.12.006

Fedorenko, E., Duncan, J., and Kanwisher, N. (2013). Broad domain generality in focal regions of frontal and parietal cortex. Proc. Natl. Acad. Sci. U.S.A. 110, 16616–16621. doi: 10.1073/pnas.1315235110

Fedorenko, E., Hsieh, P.-J., Nieto-Castanon, A., Whitfield-Gabrieli, S., and Kanwisher, N. (2010). New method for fMRI investigations of language: defining ROIs functionally in individual subjects. J. Neurophysiol. 104, 1177–1194. doi: 10.1152/jn.00032.2010

Feng, W., Wu, Y., Jan, C., Yu, H., Jiang, X., and Zhou, X. (2017). Effects of contextual relevance on pragmatic inference during conversation: an fMRI study. Brain Lang. 171, 52–61. doi: 10.1016/j.bandl.2017.04.005

Ferstl, E. C., Neumann, J., Bogler, C., and von Cramon, D. Y. (2008). The extended language network: a meta-analysis of neuroimaging studies on text comprehension. Hum. Brain Mapp. 29, 581–593. doi: 10.1002/hbm.20422

Ferstl, E. C., and von Cramon, D. Y. (2002). What does the frontomedian cortex contribute to language processing: coherence or theory of mind? Neuroimage 17, 1599–1612. doi: 10.1006/nimg.2002.1247

Filik, R., Ţurcan, A., Ralph-Nearman, C., and Pitiot, A. (2019). What is the difference between irony and sarcasm? An fMRI study. Cortex 115, 112–122. doi: 10.1016/j.cortex.2019.01.025

Friederici, A. D. (2015). White-matter pathways for speech and language processing. Handb. Clin. Neurol. 129, 177–186. doi: 10.1016/B978-0-444-62630-1.00010-X

Frith, C. D., and Frith, U. (2012). Mechanisms of social cognition. Annu Rev Psychol. 63, 287–313. doi: 10.1146/annurev-psych-120710-100449

Gauthier, S., Reisberg, B., Zaudig, M., Petersen, R. C., Ritchie, K., Broich, K., et al. (2010). Mild cognitive impairment. Lancet 367, 1262–1270.

Gollan, T. H., Weissberger, G. H., Runnqvist, E., Montoya, R. I., and Cera, C. M. (2012). Self-ratings of spoken language dominance: a Multilingual Naming Test (MINT) and preliminary norms for young and aging Spanish-English bilinguals. Biling. (Camb. Engl.) 15, 594–615. doi: 10.1017/S1366728911000332

Grice, H. P. (1975). “Logic and conversation,” in Syntax and Semantics , Vol. 3, eds P. Cole and J. Morgan (New York: Academic Press), 41–58.

Grossman, M. (2018). Linguistic aspects of primary progressive aphasia. Annu. Rev. Linguist. 4, 377–403. doi: 10.1146/annurev-linguistics-011516-034253

Hagoort, P. (2005). On Broca, brain, and binding: a new framework. Trends Cogn. Sci. 9, 416–423. doi: 10.1016/j.tics.2005.07.004

Hagoort, P. (2013). MUC (Memory, Unification, Control) and beyond. Front. Psychol. 4:416. doi: 10.3389/fpsyg.2013.00416

Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca’s region and beyond. Curr. Opin. Neurobiol. 28, 136–141. doi: 10.1016/j.conb.2014.07.013

Hassabis, D., Kumaran, D., and Maguire, E. A. (2007). Using imagination to understand the neural basis of episodic memory. J. Neurosci. 27, 14365–14374. doi: 10.1523/jneurosci.4549-07.2007

Hasson, U., Egidi, G., Marelli, M., and Willems, R. M. (2018). Grounding the neurobiology of language in first principles: the necessity of non-language-centric explanations for language comprehension. Cognition 180, 135–157. doi: 10.1016/j.cognition.2018.06.018

Healey, M. L., and Grossman, M. (2018). Cognitive and affective perspective-taking: evidence for shared and dissociable anatomical substrates. Front. Neurol. 9:491. doi: 10.3389/fneur.2018.00491

Herold, D., Spengler, S., Sajonz, B., Usnich, T., and Bermpohl, F. (2016). Common and distinct networks for self-referential and social stimulus processing in the human brain. Brain Struct. Funct. 221, 3475–3485. doi: 10.1007/s00429-015-1113-9

Hickok, G., and Poeppel, D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92, 67–99. doi: 10.1016/j.cognition.2003.10.011

Iskandar, S., and Baird, A. D. (2014). The role of working memory and divided attention in metaphor interpretation. J. Psycholinguist. Res. 43, 555–568. doi: 10.1007/s10936-013-9267-1

Jang, G., Yoon, S. A., Lee, S. E., Park, H., Kim, J., Ko, J. H., et al. (2013). Everyday conversation requires cognitive inference: neural bases of comprehending implicated meanings in conversations. Neuroimage 81, 61–72. doi: 10.1016/j.neuroimage.2013.05.027

Johnson, J. E., and Turkstra, L. S. (2012). Inference in conversation of adults with traumatic brain injury. Brain Inj. 26, 1118–1126. doi: 10.3109/02699052.2012.666370



B1 English Grammar Test – Reported Speech Consolidation multiple-choice questions


Choose the correct answer.

1   “I get up every morning at seven o’clock”, Peter said.

a) Peter said he got up every morning at seven o’clock.

b) Peter said I got up every morning at seven o’clock.

c) Peter said he had got up every morning at seven o’clock.

d) None of these.

2   Tom said, “I want to visit my friends this weekend”.

a) Tom said he wants to visit his friends that weekend.

b) Tom said he wanted to visit his friends that weekend.

c) Tom said he wanted to visit his friends this weekend.

3   She asked me, "When are we going to leave?"

a) She asked me when she was going to leave.

b) She asked me when we were going to leave.

c) She asked me when we are going to leave.

4   They said, “We’ve lived here for a long time”.

a) They said they have lived there for a long time.

b) They said they lived here for a long time.

c) They said they had lived there for a long time.

5   Peter said, “I may bring someone with me to the party”.

a) Peter said he might bring someone with him to the party.

b) Peter said he bring someone with him to the party.

c) Peter said he might bring someone with her to the party.

6   Jack said, “He must be guilty!”

a) Jack said he must guilty.

b) Jack said he must have be guilty.

c) Jack said he must have been guilty.

7   He asked me, "Have you finished reading the newspaper?"

a) He asked me if had I finished reading the newspaper.

b) He asked me if I had finished reading the newspaper.

c) He asked me if I finished reading the newspaper.

8   You said, "I will help you!"

a) You said you would help me!

b) You said I would help you!

c) You said you would help her!

9   Jerry said, "I'm studying English a lot at the moment!"

a) Jerry said he was studying English a lot at that moment.

b) Jerry said he was studying English a lot at the moment.

c) Jerry said I was studying English a lot at that moment.

10   Cheryl asked her, "How long have you lived here?"

a) Cheryl asked her how long she has lived there.

b) Cheryl asked her how long she lived there.

c) Cheryl asked her how long she had lived there.

11   Susan reassured me, “I can come tonight”.

a) Susan told me I could come that night.

b) Susan told me she could come that night.

c) Susan told me she could come tomorrow evening.

12   She said, “I’ve worked here since I left my last job”.

a) She told me that she worked there since she had left her last job.

b) She told me that she had worked there since she had left her last job.

c) She told me that she had worked there since she left her last job.

13   Mark asked me, "Why do you want to study Russian?"

a) Mark asked her why I wanted to study Russian.

b) Mark asked me why did I want to study Russian.

c) Mark asked me why I wanted to study Russian.

14   He said, "I must get going. Otherwise, I'm going to be late".

a) He told me he had to get going. Otherwise, he was going to be late.

b) He told me he had to get going. Otherwise, I was going to be late.

c) He told me he has to get going. Otherwise, he was going to be late.

15   She said, “I really wish I had bought that new car”.

a) She told me she really wished she bought that new car.

b) She told me she really had wished she had bought that new car.

c) She told me she really wished she had bought that new car.

16   Tom said, "I travel to exotic places". In reported speech, this sentence becomes: Tom said he … to exotic places.

b) traveled

17   Susan said, "I am going to the store". In reported speech, this sentence becomes: Susan said that she … to the store.

b) am going

c) was going

18   Linda told me, "I owned a flower shop". In reported speech, this sentence becomes: Linda told me that she … a flower shop.

b) was owing

c) had owned

19   He said, "I will take my mom to the airport". In reported speech, this sentence becomes: He said that he … his mother to the airport.

a) will take

c) would take

20   "I have been swimming in the pool since 9 am", he said. In reported speech, this sentence becomes: He said he … in the pool since 9 am.

b) swimming

c) had been swimming

21   Sam said he was going to Europe after graduation.

a) Sam said, “I am going to Europe after graduation”.

b) Sam said, “I was going to Europe after graduation”.

c) Sam said, “I were going to Europe after graduation”.

22   Sharon told me that she would be entering the talent show.

a) Sharon told me, “I enter the talent show”.

b) Sharon told me, “I will be entering the talent show”.

c) Sharon told me, “I would be entering the talent show”.

23   Frank said that the phone had been ringing all day.

a) Frank said, “The phone was ringing all day”.

b) Frank said, “The phone has been ringing all day”.

c) Frank said, “The phone is ringing all day”.

24   Jenny said she would help me with my homework.

a) Jenny said, “I will help you with your homework”.

b) Jenny said, “I would help you with your homework”.

c) Jenny said, “I am helping you with your homework”.

25   Nancy said that she had played basketball in college.

a) Nancy said, “I play basketball in college”.

b) Nancy said, “I am playing basketball in college”.

c) Nancy said, “I played basketball in college”.

26   She said, "I will meet her tomorrow". In reported speech, this sentence becomes: She said that she would meet her … .

b) the next day

c) next week

27   He said, "I have a business trip to Boston next week". In reported speech, this sentence becomes: He said that he had a business trip to Boston … .

a) this week

b) in two weeks

c) the following week

28   "I got a new car last month", he told me. In reported speech, this sentence becomes: He told me that he had gotten a new car … .

a) that month

b) that months

c) the previous month

29   Susie said, "I was sick yesterday". In reported speech, this sentence becomes: Susie said that she had been sick … .

b) yesterday

c) the day before

30   "I quit my job today", he told me. In reported speech, this sentence becomes: He told me that he had quit his job … .

a) that day

c) the next day

Answers:

1 a   2 b   3 b   4 c   5 a   6 c   7 b   8 a   9 a   10 c

11 b   12 b   13 c   14 a   15 c   16 b   17 c   18 c   19 c   20 c

21 a   22 b   23 b   24 a   25 c   26 b   27 c   28 c   29 c   30 a
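The backshifting pattern these questions drill — present to past, "will" to "would", and time words such as "tomorrow" becoming "the next day" — can be sketched as a lookup table. The snippet below is a toy illustration of my own, not part of the quiz: it uses naive string substitution, ignores pronoun changes and context, and simplifies special cases such as "must" of deduction (which becomes "must have", not "had to").

```python
# Toy backshift tables for reported speech. Simplified: real backshifting
# depends on context (e.g. "must" of deduction -> "must have been").
TENSE_BACKSHIFT = {
    "am": "was", "is": "was", "are": "were",
    "have": "had", "has": "had",
    "will": "would", "can": "could", "may": "might", "must": "had to",
}

TIME_AND_PLACE_SHIFT = {
    "tomorrow": "the next day",
    "yesterday": "the day before",
    "today": "that day",
    "last month": "the previous month",
    "next week": "the following week",
    "here": "there",
    "this": "that",
}

def backshift(quoted_speech: str) -> str:
    """Apply the simplified backshift tables to a quoted sentence."""
    text = quoted_speech.lower()
    # Naive substring replacement: fine for these examples, but it would
    # also match inside longer words (e.g. "here" inside "where").
    for direct, reported in TIME_AND_PLACE_SHIFT.items():
        text = text.replace(direct, reported)
    words = [TENSE_BACKSHIFT.get(word, word) for word in text.split()]
    return " ".join(words)

print(backshift("I will help you here tomorrow"))
# → i would help you there the next day
```

Running `backshift("I must leave today")` likewise yields "i had to leave that day", matching the pattern in question 30.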



Test: Reported Speech – 1 (Class 9 MCQ)

A 10-question MCQ test: Reported Speech – 1.

Direction: Choose the option which best expresses the given sentence in Indirect/Direct speech.

Q. I told him that he was not working hard.

I said to him, "You are not working hard."

I told to him, "You are not working hard."

I said, "You are not working hard."

I said to him, "He is not working hard."



Direction: Choose the option which best expresses the given sentence in Indirect/Direct speech. Q. He said to his father, "Please increase my pocket-money."

He told his father, "Please increase the pocket-money"

He pleaded his father to please increase my pocket money.

He requested his father to increase his pocket-money.

He asked his father to increase his pocket-money.

Direction: Choose the option which best expresses the given sentence in Indirect/Direct speech. Q. He exclaimed with joy that India had won the Sahara Cup.

He said, "India has won the Sahara Cup"

He said, "India won the Sahara Cup"

He said, "How! India will win the Sahara Cup"

He said, "Hurrah! India has won the Sahara Cup"

Direction: Choose the option which best expresses the given sentence in Indirect/Direct speech.

Q. The man said, "No, I refuse to confess my guilt."

The man emphatically refused to confess guilt.

The man refused to confess his guilt.

The man told that he did not confess guilt.

The man was stubborn enough to confess guilt.

Q. My cousin said, "My room-mate had snored throughout the night."


My cousin said that her room-mate snored throughout the night.

My cousin told me that her roommate snored throughout the night.

My cousin complained to me that her room-mate is snoring throughout the night.

My cousin felt that her room-mate may be snoring throughout the night.

Q. She said to her friend, "I know where everyone is."

She told that she knew where was everyone.

She told her friend that she knew where everyone was.

She told her friend that she knew where is everyone.

She told her friend that she knows where was everyone.

Fill in the blanks with the correct option.

Q. He asked us ____ show our passports.

none of these

Q. She asked us ____ on time.

Q. They asked me _____ going to the party.

that they were

if they were

Q. She said that no one _____ to the meeting last week.



Am J Speech Lang Pathol

Clear Speech Variants: An Investigation of Intelligibility and Speaker Effort in Speakers With Parkinson's Disease

Kaila L. Stipancic

a Department of Communicative Disorders and Sciences, University at Buffalo, NY

Frits van Brenk

b Utrecht Institute of Linguistics OTS, Utrecht University, the Netherlands

Alexander Kain

c Department of Pediatrics, Oregon Health & Science University, Portland

Gregory Wilding

d Department of Biostatistics, University at Buffalo, NY

Kris Tjaden

Associated Data

Data supporting the results reported in this article are available for interested researchers on request from the authors.

Purpose:

This study investigated the effects of three clear speech variants on sentence intelligibility and speaking effort for speakers with Parkinson's disease (PD) and age- and sex-matched neurologically healthy controls.

Method:

Fourteen speakers with PD and 14 neurologically healthy speakers participated. Each speaker was recorded reading 18 sentences from the Speech Intelligibility Test in their habitual speaking style and in three clear speech variants: clear (SC; given instructions to speak clearly), hearing impaired (HI; given instructions to speak as if speaking to someone with a hearing impairment), and overenunciate (OE; given instructions to overenunciate each word). Speakers rated the amount of physical and mental effort exerted during each speaking condition using visual analog scales (averaged to yield a metric of overall speaking effort). Sentence productions were orthographically transcribed by 50 naive listeners. Linear mixed-effects models were used to compare intelligibility and speaking effort across the clear speech variants.
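Intelligibility from orthographic transcription is typically scored as the percentage of target words a listener correctly recovers. The function below is a rough, hypothetical sketch of that kind of scoring, not the authors' actual procedure; real protocols handle misspellings, homophones, and word order far more carefully.

```python
from collections import Counter

def percent_words_correct(target: str, transcript: str) -> float:
    """Score a listener transcript as the percentage of target words recovered.

    Words are matched as an unordered multiset; case and basic punctuation
    are ignored. A simplified illustration only.
    """
    def normalize(sentence: str) -> list[str]:
        return sentence.lower().replace(",", "").replace(".", "").split()

    target_words = Counter(normalize(target))
    transcript_words = Counter(normalize(transcript))
    # Multiset intersection counts each target word at most as often
    # as it appears in both the target and the transcript.
    matched = sum((target_words & transcript_words).values())
    return 100.0 * matched / max(1, sum(target_words.values()))

print(percent_words_correct("The boy quickly ran home.",
                            "the boy ran home"))  # → 80.0
```

Averaging such scores over sentences and listeners yields a per-speaker, per-condition intelligibility estimate of the sort compared statistically in the study.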

Results:

Intelligibility was reduced for the PD group in comparison to the control group only in the habitual condition. All clear speech variants significantly improved intelligibility above habitual levels for the PD group, with OE maximizing intelligibility, followed by the SC and HI conditions. Both groups rated speaking effort to be significantly higher for both the OE and HI conditions versus the SC and habitual conditions.

Discussion:

For speakers with PD, all clear speech variants increased intelligibility to a level comparable to that of healthy controls. All clear speech variants were also associated with higher levels of speaking effort than habitual speech for the speakers with PD. Clinically, findings suggest that clear speech training programs should consider using the instruction "overenunciate" to maximize intelligibility. Future research is needed to determine whether the high levels of speaking effort elicited by the clear speech variants affect long-term sustainability of the intelligibility benefit.

Many patients with Parkinson's disease (PD) experience dysarthria, a neuromotor speech disorder affecting speech execution. Dysarthria often results in reduced speech intelligibility, defined as the degree to which a listener can recover the acoustic signal produced by a speaker (Yorkston et al., 1996). Reductions in speech intelligibility negatively affect quality of life in patients with PD (Miller et al., 2006; Spencer et al., 2020; van Hooren et al., 2016), and therefore, improving intelligibility is often a central goal of speech-language therapy (Duffy, 2020; Yorkston, 2010). With this goal in mind, treatments focused on increasing vocal loudness in patients with PD are common (for examples, see Broadfoot et al., 2019; Herd et al., 2012; Muñoz-Vigueras et al., 2020; Richardson et al., 2014); however, other behavioral techniques that target global aspects of speech also show promise for improving intelligibility in patients with PD.

Adopting a clear style of speaking is a strategy talkers use to maximize intelligibility (Smiljanić, 2021; Smiljanić & Bradlow, 2009; Uchanski, 2008) and has been widely recommended as a component of behavioral treatment protocols aimed at improving intelligibility for speakers with dysarthria, including dysarthria secondary to PD (Beukelman et al., 2002; Yorkston, Hakel, et al., 2007). A clear speech style elicits a variety of acoustic adjustments at both segmental and suprasegmental levels, resulting in intelligibility improvements in neurotypical speech (Smiljanić, 2021). In addition, a growing number of studies have indicated the perceptual benefits of a clear speech style in individuals with dysarthria (Hanson et al., 2004; Park et al., 2016; Stipancic et al., 2016; Tjaden et al., 2014). Across studies, the reported magnitude of intelligibility gains as a result of adopting a clear speaking style is variable. A possible contributing factor to this variability is the instruction or cues provided to speakers to elicit clear speech. The present work sought to evaluate whether different instructions for eliciting clear speech affect intelligibility. This project also identified an additional consideration for determining the feasibility of implementing different clear speech variants: the amount of effort a speaker employs to achieve a clear speech style.

The instructions or cues given to speakers to elicit clear speech are a possible contributing factor to the variable outcomes reported in previous literature ( Smiljanić & Bradlow, 2009 ; Uchanski, 2008 ). Commonly used instructions include, “Speak clearly, so that a hearing-impaired person would be able to understand you” ( Ferguson, 2004 ; Ferguson & Kewley-Port, 2002 ), “Talk like you are speaking to a listener with a hearing loss or who is a non-native speaker” ( Smiljanić & Bradlow, 2005 ), “Produce the items as clearly as possible, as if I am having trouble hearing or understanding you” ( Goberman & Elmer, 2005 ; Rosen et al., 2011 ; Whitfield & Goberman, 2014 ), “Pretend you are teaching words to second language learners” ( Kang & Guion, 2008 ), and “Produce this sentence as clearly as possible by overenunciating or producing it as if you were speaking to someone with a hearing impairment” ( Kuruvilla-Dugdale & Chuquilin-Arista, 2017 ). Although instructions to elicit clear speech vary widely across studies, work directly comparing the perceptual consequences of different clear speech instructions or the effects of differing instructions on speech production is sparse. Lam et al. (2012) compared a variety of acoustic measures obtained from sentences on the Assessment of Intelligibility of Dysarthric Speech ( Yorkston et al., 1984 ) produced by healthy young adults in habitual, clear (SC; “while speaking clearly”), hearing impaired (HI; “speak as if speaking to someone who has a hearing impairment”), and overenunciated (OE; “overenunciate each word”) speaking conditions. Results showed that, relative to the habitual speaking condition, the SC, HI, and OE conditions were associated with changes in measures of vowel production, speech timing, and vocal intensity, with the OE condition showing the largest acoustic changes relative to the habitual condition.
This side-by-side comparison of clear speech variants indicated that the exact instruction used to elicit clear speech affects acoustic measures of speech production. A subsequent study by Lam and Tjaden (2013b) investigated whether the different instructions for eliciting clear speech reported by Lam et al. (the same instructions used in the Method section of this study) also affected judgments of intelligibility for the neurologically healthy speakers. The results showed that all clear speech variants were accompanied by increases in intelligibility, with the OE condition showing the largest improvements, followed by the HI and SC conditions, providing evidence that listeners are sensitive to the speech production adjustments elicited by different clear speech instructions. In addition, greater magnitudes of acoustic change in the nonhabitual conditions, including lax vowel space, articulation rate, and vocal intensity, were found to be associated with greater increases in intelligibility ( Lam & Tjaden, 2013a ). Although intelligibility differences across clear speech variants have been reported for young healthy speakers, it remains unknown whether these findings hold for clinical populations. For example, for some patients with PD, a neuromuscular constraint resulting in oromotor rigidity could limit articulatory/acoustic modulation ( Pinto et al., 2017 ; Tsao & Weismer, 1997 ; Tsao et al., 2006 ), resulting in unaltered, rather than improved, intelligibility across clear speech variants.

Although the above studies examined speech of neurologically healthy individuals, a similar approach may assist in enhancing the therapeutic use of clear speech strategies for clinical populations with dysarthria. Lam and Tjaden (2016) investigated how clear speech variants affect acoustic measures of speech in speakers with PD and healthy control speakers. Of interest were both segmental (vowel space area, first moment coefficient differences for consonant pairs, second formant slopes of diphthongs, and vowel and fricative durations) and suprasegmental (fundamental frequency [ f o ], sound pressure level [SPL], and articulation rate) acoustic measures. The results showed that the majority of acoustic measures differed between the variants of clear speech instruction and the habitual condition, but results were condition specific: The OE condition elicited the greatest magnitude of change for segmental measures (vowel space area and vowel durations) and the slowest articulation rates, whereas the HI condition elicited the greatest fricative durations and suprasegmental adjustments ( f o and SPL). The authors suggested that findings could be the result of task-specific interpretation of speech instructions. For example, the instructions used to elicit the HI condition might have led speakers to predominantly adjust suprasegmental aspects of speech at the respiratory–phonatory level, whereas the instructions used to elicit the OE condition may have led speakers to predominantly exaggerate articulatory gestures. The current study extends this line of inquiry to the perceptual consequences of different clear speech instructions for the same speakers and speech materials reported in Lam and Tjaden (2016) .

Speaker Effort

As researchers examine how to best optimize behavioral speech protocols to enhance patient outcomes, factors outside of clinician variation in implementing a given behavioral speech treatment or protocol must also be considered. One factor that may influence implementation is the sustainability, or long-term maintenance, of the behavioral modification over time. The amount of effort required to complete a task is one element that may affect sustainability. Speaking effort is defined here as the overall amount of subjective exertion, physical and/or mental, that is required to produce speech. The concept of speaker effort is addressed in Lindblom's (1990) hypo- and hyperarticulation (H&H) theory of speech production. The H&H theory describes speech production on a continuum wherein talkers adjust their speech output for the needs of their listener. Conversational speech represents the “hypoarticulate” end of the continuum, and clear speech represents the “hyperarticulate” end of the continuum. According to Lindblom, a trade-off occurs between speech clarity and economy of effort, such that talkers minimize effort during habitually produced speech and increase effort when producing clear speech. Economy of effort has also been incorporated into more contemporary models of speech production (see Guenther et al., 2006 ; Perkell et al., 2002 ). In these models, greater or smaller articulatory displacements are associated with increased or decreased effort for speech, respectively. These models posit that articulatory trajectories typically ensure sufficient perceptual contrasts while minimizing speaking effort ( Guenther & Perkell, 2004 ; Perkell et al., 2000 ). More simply, speakers will use the least amount of effort required for speech to be understood in the prevailing context.

Particularly in research examining the short-term effects of a speaking style on speech production, including the current study, evaluating the sustainability of a treatment strategy is challenging. In the biological and limb motor control literatures, authors consider how “costs” affect the utility of motor behaviors ( Morel et al., 2017 ). For example, Cos (2017) discussed “cost” as the “devaluation of the benefit associated with an option due to its associated effort expenditure” (p. 2). In other words, as the amount of effort required to perform a behavior increases, the benefit of the behavior becomes undervalued, which subsequently decreases the likelihood that the behavior will continue. Stated in the terms of the clear speech variant literature, if a clear speech variant requires too much effort on the part of the speaker, the value of the intelligibility benefit derived from the clear speaking mode is degraded, and the speech modifications may be abandoned ( Taylor et al., 2020 ). The relationship between speech production and effort is not a novel concept. Zipf (1965 ) studied the “principle of least effort” in human behavior and proposed that speakers use shorter words more frequently in discourse to economize time and effort in speech production. Therefore, balancing intelligibility benefit and speaker effort may be an important consideration for the feasibility of behavioral speech strategies and for improving adherence to treatment. Particularly for patients with PD who already have an increased sense of effort and fatigue (see reviews in Friedman et al., 2007 , 2016 ; Marr, 1991 ) and who report higher levels of effort compared to healthy controls for speech tasks and activities of daily living ( Solomon & Robin, 2005 ), implementing an overly effortful speech strategy or style may be impractical. 
Rather, consistent with Lindblom's (1990) economy of effort concept, a more feasible approach might be to select an instruction that elicits moderate speech change without overexerting the speaker.

Prior work from our lab found the clear speech instruction “overenunciate each word” elicited the greatest magnitude of segmental change and the slowest articulation rates, whereas “speak to someone with a hearing impairment” elicited the greatest suprasegmental adjustments for both speakers with PD and age- and sex-matched neurologically healthy controls ( Lam & Tjaden, 2016 ). The current study leveraged this database to examine whether the different clear speech instructions yield different magnitudes of intelligibility benefit, as well as different magnitudes of speaker effort. Knowledge of how these instructions affect functional communication, as well as speaker perception of effort, not only would strengthen their scientific evidence base but may also advance theoretical understanding of intelligibility and inform implementation of these techniques in clinical practice. Understanding the effect of instruction on intelligibility outcomes can also inform clear speech training programs ( Caissie et al., 2005 ; Levitt et al., 2015 ; Park et al., 2016 ) to produce benefits in time- and effort-efficient ways. To this end, the current study addressed three research questions:

  • What is the effect of different clear speech instructions on speech intelligibility in individuals with PD, as compared to neurologically healthy controls?
  • What is the effect of different clear speech instructions on speaking effort in individuals with PD, as compared to neurologically healthy controls?
  • What is the relationship of speech intelligibility to speaking effort across the clear speech instruction variants?

Speakers and Speech Materials

The study was approved by the institutional review board (IRB protocol number: 030-732229) through the University at Buffalo. All participants provided informed consent before completing study procedures. Speakers and speech materials are described in detail in Lam and Tjaden (2016) . A total of 28 speakers were recruited for the study, including 14 participants (nine men, five women) with idiopathic PD and 14 age- and sex-matched neurologically healthy control participants (nine men, five women). Demographic information for the speakers is displayed in Table 1 . Participants in both groups ranged from 55 to 81 years old, with a mean age of 68 years ( SD = 7). Pure-tone thresholds were obtained at octave frequencies between 250 and 8000 Hz for all speakers in the UB Speech and Hearing Clinic. Screening results were provided to each speaker but did not exclude speakers from participating (see Sussman & Tjaden, 2012 ). Ten speakers in each group had thresholds of 40 dB or better in at least one ear at 1, 2, and 4 kHz ( Darling & Huber, 2011 ; Weinstein & Ventry, 1983 ), with the other four participants in each group presenting with mild hearing loss. Thus, the same proportion of speakers in both groups exhibited mild hearing loss, as would be expected for this age cohort. None of the speakers used hearing aids, and all were able to follow verbal instructions.

Speaker demographic data.

Note.  SLP = speech-language pathologist; C = control speakers; PD = speakers with Parkinson's disease.

Speakers were recruited from Western New York and reported speaking American English as a first language. All speakers had achieved at least a high school diploma, reported adequate vision or corrected vision for reading, and achieved a score of 26 or better on the Mini-Mental State Examination ( Molloy, 1999 ). Control speakers denied history of neurological, speech, language, or hearing pathology. Speakers with PD reported no history of neurological impairment other than PD, and any speech therapy received postdiagnosis was documented but did not exclude participants from the current study. Five speakers with PD reported previously participating in the Lee Silverman Voice Treatment (LSVT). Four of these individuals had completed the treatment program more than 2 years before the current study, and one speaker had completed the treatment more than a year before the current study. At the time of data collection, the speaker who had most recently completed LSVT and one other speaker with a history of LSVT were participating in weekly group therapy sessions practicing increased vocal loudness. All speakers were required to deny history of neurosurgical treatments (i.e., deep brain stimulation). Speakers were reimbursed a modest fee for participating.

To document intelligibility and speech severity, perceptual testing was completed by three certified speech-language pathologists (SLPs). All SLPs had at least 3 years of experience with dysarthria. For each speaker, 11 sentences were randomly generated using the computerized Speech Intelligibility Test (SIT; Yorkston, Beukelman, et al., 2007 ). Perceptual testing took place over one 2-hr session, and all testing was completed in a quiet room via binaural headphones (Sony MDR-V300 headphones). Presentation of speech stimuli was blocked by the speaker, and each SLP had a different random ordering of speakers. For every speaker, the speech intelligibility task was completed first, followed by the speech severity task. Procedures for the intelligibility task paralleled those of the SIT ( Yorkston, Beukelman, et al., 2007 ) such that acoustic signals produced by the speakers were not altered (i.e., were not normalized for intensity) before presentation to the SLPs. SLPs were presented with sentences one at a time, in order from Sentence 1 to Sentence 11, and were asked to type out the words they heard. After the intelligibility task, SLPs then completed the speech severity task.

Procedures and instructions for the speech severity task were adapted from Sussman and Tjaden (2012) . For this task, the same 11 sentences from the transcription task were played continuously for a given speaker, and SLPs were asked to judge overall severity, “paying attention to voice quality, resonance, articulatory precision, speech rhythm, prosody and naturalness…without focusing on how understandable or intelligible the person is.” After the 11 sentences were presented, SLPs were prompted to make a single judgment of overall severity using a computerized visual analog scale (VAS). SLPs were presented with a vertical line 150 mm long and asked to click anywhere along the line ranging from “no impairment” at the bottom to “severely impaired” at the top of the scale. Ratings were converted using custom software (MMScript) onto a scale from 0 to 1.0, where 0 represents no impairment and scores closer to 1.0 represent severe impairment.
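The mm-to-score conversion for the severity VAS can be sketched as below. This is a minimal illustration, assuming a simple linear mapping; the function name is hypothetical, and the exact conversion performed by the MMScript software is not described in the text.

```python
def vas_to_score(click_mm: float, scale_mm: float = 150.0) -> float:
    """Convert a click position on a 150-mm visual analog scale (VAS),
    measured from the 'no impairment' end of the line, to a severity
    score between 0 and 1.0. A linear mapping is assumed here."""
    if not 0.0 <= click_mm <= scale_mm:
        raise ValueError("click position must lie on the scale")
    return click_mm / scale_mm
```

Under this mapping, a click 75 mm up the line corresponds to a score of 0.5, the midpoint of the severity scale.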

Table 1 displays the SLP-judged transcription intelligibility and scaled severity scores for each participant. Mean SIT transcription scores for the PD group and control group were 98.74% ( SD = 1.43) and 97.92% ( SD = 2.42), respectively. On average, the PD group ( M = 0.27, SD = 0.19) was judged to have more severe speech than the control group ( M = 0.17, SD = 0.16). SIT scores and scaled severity ratings demonstrate that the majority of speakers had mild dysarthria. A majority of speakers anecdotally reported developing speech difficulties after being diagnosed with PD. In addition, one third of the PD group reported receiving speech therapy postdiagnosis. Therefore, despite the lack of substantial differences in SIT intelligibility or speech severity scores between speakers with PD and control speakers, speakers with PD who participated in the current study are representative of the clinical population that may pursue speech therapy.

Data Collection

Data collection for speakers occurred over two sessions. During the first session, patient history, a cognitive screening, and the audiological screening were completed, and a clinical speech sample was obtained. During the second visit, audio recordings of experimental speech stimuli were obtained. Each session lasted between 60 and 90 min. The first and second sessions were separated by at least 1 hr and no more than 5 days. To minimize any potential medication effects, recording sessions for speakers with PD were scheduled 1 hr after taking anti-Parkinsonian medications.

Speakers were seated in a sound-treated booth in front of a computer screen. All speech stimuli were presented one at a time using Microsoft PowerPoint. Speakers were recorded using an over-the-ear Countryman E6IOP5L2 ISOMAX condenser microphone. A mouth-to-microphone distance of 6 cm was maintained throughout the recording session. Audio samples were recorded using the M-Audio MobilePre USB preamplifier and digitized to a computer at a sampling rate of 22 kHz using TF32 ( Milenkovic, 2005 ) and Praat ( Boersma & Weenink, 2018 ). Two speakers were recorded in TF32; however, due to software complications, the remainder of the recordings were completed in Praat.

Experimental Speech Stimuli

For each speaker, experimental stimuli consisted of 18 different sentences, ranging from five to 12 words selected from the SIT ( Yorkston, Beukelman, et al., 2007 ). Fourteen different sentence sets were constructed for the 28 speakers. Therefore, each age- and sex-matched speaker pair (i.e., PD01 and C01) produced the same sentence set. For each speaker, the same set of experimental stimuli was recorded in four speaking conditions (i.e., habitual, SC, HI, and OE). The habitual condition was always recorded first. For the habitual condition, speakers were asked to read the sentences aloud. Six different orderings of the nonhabitual conditions (SC, HI, and OE) were randomized and blocked across speakers. In the SC condition, speakers were instructed to “say the following sentences while speaking clearly.” For the HI condition, speakers were asked to “say the following sentences while speaking to someone with a hearing impairment,” and for the OE condition, speakers were asked to “say the following sentences while overenunciating each word.” Written instructions for each condition were presented both visually and verbally once at the beginning and midway throughout recording for each condition. Speakers were engaged in informal conversation or provided a break in between conditions to minimize carry-over effects.
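The counterbalancing of the three nonhabitual conditions can be illustrated as follows. The `assign_orders` function is hypothetical: the study states only that the six orderings were randomized and blocked across speakers, so the exact assignment scheme shown here is an assumption.

```python
import itertools
import random

# The six possible orderings of the three nonhabitual conditions
orders = list(itertools.permutations(["SC", "HI", "OE"]))

def assign_orders(speaker_ids, seed=0):
    """Cycle through the six orderings so each ordering is used equally
    often across speakers (a sketch of blocked randomization)."""
    rng = random.Random(seed)
    shuffled = orders[:]
    rng.shuffle(shuffled)
    return {sid: shuffled[i % len(shuffled)]
            for i, sid in enumerate(speaker_ids)}
```

With 12 speakers, for example, each of the six orderings would be assigned to exactly two speakers.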

In each condition, speakers were asked to provide a rating of both physical and mental speaking effort. Both physical speaking effort and mental speaking effort were judged, as prior research suggests these domains are not strongly related ( Elbers et al., 2012 ; Friedman et al., 2007 ; Lou et al., 2001 ; Smets et al., 1995 ). Similar to procedures used in previous studies, a VAS was used to collect judgments of speaker-perceived effort ( Roh et al., 2006 ; Rudner et al., 2012 ; Solomon, 2000 ; Whitehill & Wong, 2006 ). Each VAS consisted of a vertical line 100 mm in length anchored with text at each end. In the middle and at the end of each condition, speakers were given a paper version of each VAS and asked to place a horizontal dash anywhere along the line to indicate their response. For physical speaking effort, speakers were asked to rate “Physically, how effortful was that task?” Text at the top and bottom of the scale read “a lot of physical effort” and “very little physical effort,” respectively. For mental speaking effort, speakers were asked to rate “How much were you thinking about that task?” Text at the top and bottom of the scale read “a lot of thinking” and “very little thinking,” respectively. Ratings were converted into a numerical score from 0 to 10 using the distance (in millimeters) on a ruler, with 0 representing little effort and 10 representing maximal effort. Effort scores obtained in the middle of the condition and at the end of the condition were averaged to obtain mean physical speaking effort and mean mental speaking effort scores. At odds with prior studies suggesting the concepts are not associated with each other ( Elbers et al., 2012 ; Friedman et al., 2007 ; Lou et al., 2001 ; Smets et al., 1995 ), physical speaking effort and mental speaking effort were found to be highly correlated in this cohort of speakers (Pearson's r = .89, 95% CI [.85, .93], p < .001). 
As such, physical speaking effort and mental speaking effort scores were averaged to obtain an overall speaking effort score for use in statistical analyses.
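The effort-scoring procedure above can be sketched in a few lines; the linear mm-to-score mapping and the function names are assumptions of this illustration.

```python
def effort_score(mark_mm: float, line_mm: float = 100.0) -> float:
    """Map a mark on a 100-mm effort VAS to a 0-10 score
    (linear mapping assumed)."""
    return 10.0 * mark_mm / line_mm

def condition_effort(phys_mid, phys_end, ment_mid, ment_end):
    """Average the mid- and end-of-condition marks (in mm) within each
    domain, then average the physical and mental scores into a single
    overall speaking effort score, mirroring the procedure described
    above."""
    physical = (effort_score(phys_mid) + effort_score(phys_end)) / 2
    mental = (effort_score(ment_mid) + effort_score(ment_end)) / 2
    return (physical + mental) / 2
```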

Perceptual Method and Procedure

Listeners. A total of 50 individuals, composed of 10 men and 40 women, with a mean age of 20 years ( SD = 1.4, range: 18–40) participated as listeners. Listeners were recruited from the student population at the University at Buffalo; spoke American English as their first language; and denied a history of neurological, speech, language, or hearing pathology. Listeners reported no more than minimal experience with communication problems secondary to neurological disease or injury (i.e., listeners who had completed a course on motor speech disorders were eligible to participate). All listeners passed a bilateral hearing screening at 20 dB HL at octave frequencies between 500 and 8000 Hz, had obtained at least a high school diploma, and were paid a modest fee for participating. Listeners were blinded to the study aims, speaker diagnoses, and speaking conditions.

Perceptual Task

SLP-judged intelligibility, as indexed by the SIT, was high for all speakers (see Table 1 ). Thus, to prevent ceiling effects, experimental stimuli were equated for overall amplitude and mixed with multitalker babble, consistent with procedures used in other clear speech studies ( Smiljanić & Bradlow, 2009 ). Although, per the acoustic study by Lam and Tjaden (2016) , the speaker groups did not differ from each other in SPL, there were differences in SPL across conditions. Thus, to control for any effect of audibility on intelligibility judgments, sentences first were equated for average root-mean-square (RMS) intensity. To accomplish this, speech waveforms were filtered with an A-weighted filter, and levels were calculated by averaging frame-by-frame (using frame durations on the order of 10 ms) RMS sample values of nonpausal portions of the speech. Each waveform was then multiplied by an appropriate gain factor so that the resulting waveforms all had the same average RMS value. Sentences were mixed with multitalker babble sampled at 22 kHz and low-pass filtered at 11 kHz ( Bochner et al., 2003 ; Frank & Craig, 1984 ). Based on pilot testing, a signal-to-noise ratio (SNR) of −1 dB was applied to each sentence, as this SNR minimized floor and ceiling effects in the intelligibility task. Sentences were presented via Sony Dynamic Stereo headphones (MDR-V300) at 75 dB. The dB level of stimuli was calibrated at the beginning of each listening session for five randomly selected experimental sentences using an earphone coupler and a Quest Electronics 1700 sound-level meter.
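The level equalization and noise mixing described above reduce to two standard operations, sketched here in plain Python. This is a simplified illustration: the A-weighting, frame-by-frame averaging, and exclusion of pausal portions used in the actual procedure are omitted.

```python
import math

def rms(samples):
    """Root-mean-square level of a waveform (list of samples)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def equate_rms(samples, target_rms):
    """Scale a waveform so its RMS level matches target_rms."""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

def mix_at_snr(speech, babble, snr_db):
    """Scale the babble so the speech-to-babble RMS ratio equals snr_db,
    then sum the two waveforms sample by sample."""
    target_babble_rms = rms(speech) / (10 ** (snr_db / 20))
    scaled_babble = equate_rms(babble, target_babble_rms)
    return [s + b for s, b in zip(speech, scaled_babble)]
```

For the study's −1 dB SNR, `mix_at_snr(speech, babble, -1.0)` leaves the babble slightly more intense than the speech.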

Each listener transcribed the 18 SIT sentences produced in one condition by each of the 28 speakers (504 sentences). In this manner, each stimulus was transcribed by 10 listeners, and a given listener only heard a particular SIT sentence twice (i.e., once for a PD speaker and once for a control speaker). Every listener heard a relatively equal number of conditions from each group (i.e., PD and control), and every condition for every speaker was heard by 10 different listeners.

Listeners were seated in a sound-treated booth in front of a computer. For the transcription task, listeners heard each sentence once and typed their response onto a computer using custom software. The task was self-paced, and participants followed the computer prompts to deliver each subsequent stimulus. A practice task preceded the experiment to familiarize listeners with the computer interface. Percent words correct was calculated by tabulating the number of words correctly transcribed, dividing by the number of target words, and multiplying by 100. Percent words correct was averaged across the 18 SIT sentences for each of the 10 listeners per speaker to derive an overall intelligibility score for each speaker in each condition.
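The percent-words-correct formula stated above can be written directly. The position-by-position word comparison used here is a simplifying assumption, since the study's exact word-matching rules (e.g., handling of insertions or misspellings) are not detailed.

```python
def percent_words_correct(target: str, transcript: str) -> float:
    """Number of correctly transcribed words, divided by the number of
    target words, multiplied by 100. Words are compared position by
    position after lowercasing (an assumed matching rule)."""
    target_words = target.lower().split()
    typed_words = transcript.lower().split()
    correct = sum(1 for t, w in zip(target_words, typed_words) if t == w)
    return 100.0 * correct / len(target_words)
```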

Each listener also judged a random selection of about 50 stimuli (i.e., ~10% of the data) twice for the purpose of determining intralistener reliability. Reliability was calculated by summing the number of words that were transcribed the same between the first and second presentations of a given stimulus for a given listener. A ratio was calculated between the number of words that overlapped between the two presentations and the total number of words in the stimulus to obtain the percentage of overlap between the two transcriptions. The percentage of overlap for each of the 50 repeated stimuli for a given listener was averaged to obtain an overall reliability percentage per listener. Reliability percentages ranged from 46% to 81%, with an average of 64% across all listeners, which is comparable to the reliability of similar tasks (i.e., transcription of speech in noise) reported previously ( Stipancic et al., 2016 ).
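The per-stimulus overlap computation can be sketched as below. The multiset intersection used to count shared words is an assumed interpretation of "words transcribed the same," as the exact matching rule is not specified.

```python
from collections import Counter

def overlap_percentage(first: str, second: str, stimulus: str) -> float:
    """Intralistener reliability for one repeated stimulus: the number
    of words shared by the two transcriptions (multiset intersection,
    an assumed matching rule) divided by the stimulus word count,
    expressed as a percentage."""
    shared = Counter(first.lower().split()) & Counter(second.lower().split())
    return 100.0 * sum(shared.values()) / len(stimulus.split())
```

Averaging these percentages over a listener's ~50 repeated stimuli yields that listener's overall reliability figure.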

Data Analysis

Statistical analyses were completed using SAS statistical software (Version 9.4, SAS Institute Inc.) and R ( R Development Core Team, 2013 ). To address Research Questions 1 and 2, separate linear mixed-effects (LME) models were fit to intelligibility data (Research Question 1: intelligibility across clear speech variants) and speaking effort data (Research Question 2: speaking effort across clear speech variants) in this repeated-measures design, with fixed effects of group (i.e., PD and control) and condition (i.e., habitual, SC, HI, and OE) and a random effect of speaker. For the intelligibility model, listener was also included as a random effect. Post hoc pairwise comparisons were made in conjunction with a Bonferroni correction for multiple testing. All tests were two-sided and tested at a .05 nominal significance level. To address Research Question 3, regarding the relationship between intelligibility and speaking effort, Pearson's correlations were used to assess the relationship between intelligibility and speaking effort in both groups of speakers (i.e., PD and control) and across the group of speakers as a whole.
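Two of the computations named above, the Bonferroni correction applied to the post hoc comparisons and Pearson's correlation, can be sketched in plain Python. The analyses themselves were run in SAS and R; these reference implementations are illustrative only.

```python
import math

def bonferroni(p_values):
    """Bonferroni correction: multiply each raw p value by the number
    of comparisons, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    lists of observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```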

Research Question 1: Intelligibility Across Clear Speech Variants

The intelligibility LME model results are displayed in Table 2 . Results revealed a significant effect of condition, F (3, 1047) = 33.82, p < .001, and a significant Group × Condition interaction, F (3, 1047) = 11.39, p < .001. There was no significant group effect, F (1, 1047) = 2.79, p = .095. Overall, there were significant differences between intelligibility in the habitual condition ( M = 68.66%, SD = 16.13) and the SC ( M = 73.82%, SD = 14.03, p < .001), HI ( M = 71.37%, SD = 16.86, p < .001), and OE ( M = 76.34%, SD = 13.14, p < .001) conditions. There were also significant differences in intelligibility between the SC and OE conditions ( p = .008) and the HI and OE conditions ( p < .001). There was no significant difference between the SC and HI conditions ( p = .46).

Results of the linear mixed-effects model for Research Question 1: sentence intelligibility across speaking groups (Parkinson's disease and controls) and conditions (habitual, clear, hearing impaired, and overenunciate).

Note.  Num df = number of degrees of freedom in the model; Den df = number of degrees of freedom associated with the model errors.

Figure 1 displays boxplots of intelligibility across the four conditions for each group of speakers. Intelligibility was higher for the control group ( M = 75.25%, SD = 7.92) than the PD group ( M = 62.07%, SD = 19.60) only in the habitual condition ( p = .013). For the control group, intelligibility was significantly different between the habitual condition ( M = 75.25%, SD = 7.92) and the OE condition ( M = 79.84%, SD = 8.61, p = .006), as well as between the HI ( M = 75.01%, SD = 12.50, p = .003) and OE conditions. For the PD group, intelligibility was significantly lower for the habitual condition ( M = 62.07%, SD = 19.60) as compared to the SC ( M = 69.23%, SD = 17.22, p < .001), HI ( M = 67.73%, SD = 20.13, p < .001), and OE ( M = 72.84%, SD = 16.06, p < .001) conditions. For the PD group, there were also significant differences in intelligibility between the SC and OE conditions ( p = .008) and between the HI and OE conditions ( p = .010). No other comparisons were significant.

Figure 1.

Percent intelligibility across the clear speech variants in the control and Parkinson's disease (PD) groups. * p < .05, ** p < .001; line within each box = median, hinges = 25th and 75th percentiles, whiskers = 1.5 × interquartile range, red line = comparisons between groups, teal lines = comparisons between conditions for the control group, and purple lines = comparisons between conditions for the PD group. Note that the outliers have been removed from the figure for ease of interpretation.

Research Question 2: Speaking Effort Across Clear Speech Variants

The speaking effort LME model results are displayed in Table 3 . Results revealed a significant effect of condition, F (3, 78) = 30.67, p < .001, but not a significant group effect, F (1, 78) = 0.68, p = .411, or a Group × Condition interaction, F (3, 78) = 0.23, p = .877. Speaking effort was significantly higher in the OE condition ( M = 54.55, SD = 13.14) as compared to the SC condition ( M = 40.52, SD = 14.03, p < .001) and the habitual condition ( M = 28.38, SD = 16.13, p < .001), but not as compared to the HI condition ( M = 53.27, SD = 16.86, p = 1.00). Speaking effort was also significantly different between the SC and HI conditions ( p < .001).

Results of the linear mixed-effects model for Research Question 2: speaking effort across speaking groups (Parkinson's disease and controls) and conditions (habitual, clear, hearing impaired, and overenunciate).

Figure 2 displays boxplots of speaking effort across the four conditions for each group of speakers. For the control group, speaking effort was significantly different between the habitual condition ( M = 26.39, SD = 7.92) and both the HI condition ( M = 49.89, SD = 12.50, p < .001) and the OE condition ( M = 50.21, SD = 8.61, p < .001), but not the SC condition ( M = 36.43, SD = 8.17, p = .319). The control group also had significantly different speaking effort scores between the SC and HI ( p = .040) and OE ( p = .032) conditions. No other comparisons were significant for the control group. For the PD group, speaking effort was significantly different between the habitual condition ( M = 30.36, SD = 19.60) and all other conditions (SC: M = 36.43, SD = 17.22, p = .023; HI: M = 49.89, SD = 20.13, p < .001; OE: M = 50.21, SD = 16.06, p < .001). The PD group also had significant differences in speaking effort between the SC and OE conditions ( p = .023). No other comparisons were significant for the PD group.

Figure 2.

Speaking effort across the clear speech variants in the control and Parkinson's disease (PD) groups. * p < .05, ** p < .001; line within each box = median, hinges = 25th and 75th percentiles, whiskers = 1.5 × interquartile range, teal lines = comparisons between conditions for the control group, and purple lines = comparisons between conditions for the PD group.

Research Question 3: Relationship Between Intelligibility and Speaking Effort

Figure 3 presents a scatter plot examining the relationship between intelligibility and speaking effort. There was a small but significant negative correlation between intelligibility and speaking effort when the two groups of participants and the four speaking conditions were pooled, r (110) = −.29, p < .001, such that as speaking effort increased, intelligibility decreased. For the control group alone, there was no relationship between intelligibility and speaking effort, r (54) = .007, p = .96. For the PD group alone, there was a significant negative correlation between intelligibility and speaking effort, r (54) = −.38, p = .004. Upon further inspection of the data, the correlation in the PD group appeared to be driven by data for a single speaker (PD14; red circle in Figure 3 ) who had the lowest intelligibility and highest speaking effort scores. When this speaker was removed from the correlation analysis, the association between intelligibility and speaking effort was not significant, r (50) = −.20, p = .15.
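The outsized influence that a single extreme data point (such as PD14) can exert on a Pearson correlation can be illustrated with a small leave-one-out check. This is a sketch for illustration only: the data below are fabricated, not the study's data, and the study itself did not report using this code.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Fabricated example: one speaker with very high effort and very low
# intelligibility (last pair) dominates the pooled correlation.
effort          = [30, 40, 45, 50, 55, 60, 90]
intelligibility = [80, 82, 78, 81, 79, 80, 40]

r_all = pearson_r(effort, intelligibility)            # strongly negative
r_trimmed = pearson_r(effort[:-1], intelligibility[:-1])  # weak once the extreme point is removed
print(f"r with extreme point:    {r_all:.2f}")
print(f"r without extreme point: {r_trimmed:.2f}")
```

Recomputing the correlation with the extreme point excluded, as done for PD14 in the analysis above, is a simple way to check whether an association is driven by one participant.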

Figure 3. Relationship between intelligibility and speaking effort in the control group and Parkinson's disease (PD) group across conditions. Data points from the most severe speaker (PD14) are circled in red.

Figure 4 illustrates the relationship between intelligibility and speaking effort within each of the 28 individual participants (similar to an approach used by Turner et al., 1995 ). The small number of data points precludes meaningful quantitative treatment of the data. However, visual inspection of Figure 4 reveals that, for the majority of control participants, there was either (a) a lack of variability in one or both measures (i.e., intelligibility or speaking effort), which is consistent with the lack of significant association between these measures in the control group, or (b) changes in one measure that did not correspond to changes in the other. For example, C01, C05, C06, C08, and C10 all appear to have changes in speaking effort across the conditions that were not associated with changes in intelligibility. For the PD group, however, a few patterns emerged. First, for the majority of participants (11/14, 79%), the habitual condition yielded the lowest magnitude of speaking effort, with higher speaking effort elicited by the nonhabitual conditions. One pattern is exhibited well by PD01, PD02, PD03, PD06, PD08, and PD13, who demonstrate increased intelligibility associated with increased effort. The opposite pattern (i.e., decreased intelligibility associated with increased effort) is visible for PD11. A final pattern, displayed by PD04, PD07, PD09, PD10, PD12, and PD14, resembles many control speakers in that there is no clear association between the two measures.

Figure 4. Relationship between intelligibility and speaking effort across conditions in individual participants. C = control speakers; PD = speakers with Parkinson's disease.

The current study sought to determine the impact of various instructions for eliciting clear speech on intelligibility and speaking effort for a group of speakers with PD and age- and sex-matched neurologically healthy controls. In addition, this study examined the relationship between intelligibility and speaking effort. Three main findings emerged: (a) Intelligibility was maximized in the OE condition, especially for speakers with PD; (b) speaking effort was highest in the OE condition; and (c) the relationship between intelligibility and speaking effort is complex. The following sections consider each of these findings as well as potential clinical/research implications.

Intelligibility Was Maximized in the OE Condition

Overall, the OE condition yielded the highest intelligibility gains relative to the habitual condition for both speaker groups. For the control group, intelligibility was highest in the OE condition, which was matched by intelligibility in the SC condition. For the PD group, intelligibility was statistically highest in the OE condition, as compared to each of the other three conditions (i.e., habitual, SC, and HI). Importantly, for these mildly impaired speakers with PD, all of the clear speech instructions yielded intelligibility levels that were comparable to the typical or habitual speech of age- and sex-matched controls. For the PD group, the magnitude of intelligibility gain from the habitual condition to all three clear speech conditions (i.e., SC, HI, and OE) exceeds the threshold for detectable change in intelligibility (see Stipancic & Tjaden, 2022 ; Stipancic et al., 2018 ) and thus represents meaningful changes. For speakers who have intelligibility scores ≥ 96% (as the SLP-judged intelligibility scores in Table 1 indicate for the speakers in this study), the threshold for detectable change of intelligibility has been previously reported to be ~3% (see Stipancic & Tjaden, 2022 ; however, it should be noted that detectable change of intelligibility obtained in the presence of background noise has not been established to date). The finding of optimized intelligibility in the OE condition also is consistent with previous work demonstrating that intelligibility was maximized for young, neurologically healthy controls given instructions to overenunciate as compared to “speak clearly” and “speak to someone with a hearing impairment” ( Lam & Tjaden, 2013b ).

Despite the fact that intelligibility scores derived from SLPs' transcriptions of SIT recordings in quiet were near ceiling (i.e., 98.7% for the control group and 97.9% for the PD group), intelligibility scores in the habitual condition in the presence of background noise were significantly lower for both groups (i.e., 75.25% for the control group and 62.07% for the PD group). This finding illustrates how intelligibility in quiet may not translate into intelligibility in the presence of multitalker babble. Importantly, the acoustic study by Lam and Tjaden (2016) indicated that there were no differences in SPL between these groups of speakers, providing evidence that equalizing the stimuli for SPL, and thus, audibility, did not influence current results. In addition, although SIT scores obtained from SLPs in quiet were equivalent between the control group and the PD group, the PD group had significantly worse intelligibility than the control group in the habitual condition in the presence of noise. This finding is consistent with previous work ( Chiu & Forrest, 2018 ) and demonstrates how background noise is especially detrimental to the intelligibility of speakers with PD.

Results also suggest that although the OE condition appeared to facilitate the highest intelligibility scores on average in this cohort of speakers with PD, any of the instructions explored could be useful for improving intelligibility in individual speakers with PD. Visual inspection of Figure 4 provides additional support for this claim. For example, although many of the speakers with PD (8/14 = 57%) experienced their highest intelligibility in the OE condition, three speakers with PD (21%) experienced their highest intelligibility (or comparable to another condition) in the HI condition and five speakers with PD (36%) experienced their highest intelligibility (or comparable to another condition) in the SC condition. Importantly, only one speaker with PD (PD11) had their highest intelligibility in the habitual condition, with all clear speech variants eliciting lower intelligibility, whereas the other 13 speakers with PD all experienced an improvement in intelligibility with at least one of the clear speech variants. Therefore, although not desirable from the standpoint of efficiency, clinicians could trial various instructions for maximizing intelligibility in individual patients. For example, by combining the current findings with results in Lam and Tjaden (2016), it could be hypothesized that for speakers with PD who have predominant impairments in articulation, instructions to overenunciate may be the most beneficial for intelligibility; in contrast, for speakers with PD who have predominant impairments in suprasegmental aspects of speech, instructions used in the HI condition may be the most useful. In combination with the results in Lam and Tjaden (2016), the current results further suggest the hypothesis that changes in articulation and duration are key acoustic variables explaining intelligibility change in mildly impaired speakers with PD when the effect of speech intensity is held constant.
Relatedly, Gravelin and Whitfield (2019) found that speakers with PD exhibited a smaller reduction in speaking rate than control speakers under a clear speech condition elicited by the instructions “speak as clearly as possible, as if someone was having trouble hearing or understanding you.” The authors posited that this may be the result of their instructions for eliciting clear speech and that instructions to overenunciate may have promoted larger articulatory adjustments in the speakers with PD ( Gravelin & Whitfield, 2019 ).

Speaking Effort Was Highest in the OE Condition

Across all speakers, all nonhabitual speaking conditions (i.e., SC, HI, and OE) elicited larger magnitudes of speaking effort than the habitual condition. Overall, the OE condition tended to elicit the greatest ratings of effort, followed by the HI condition (although this contrast was not statistically significant) and the SC condition. High levels of speaking effort in the OE condition, as related to findings by Lam and Tjaden (2016) of increased vowel space area, lengthened vowel durations, and slower articulation rates in the OE condition, are consistent with the H&H theory that increased effort would be associated with greater articulatory excursions ( Lindblom, 1990 ). Indeed, other authors have reported evidence of increased speaking effort under clear speech conditions. In a group of young, neurologically healthy controls, Whitfield et al. (2021) found that tracking accuracy in a concurrently performed visuomotor tracking task decreased when participants adopted a clear speaking style relative to when they used their habitual speech. This finding was interpreted as evidence that the adoption of a clear speaking style requires greater attentional resources than habitual speech ( Whitfield et al., 2021 ). These findings are consistent with the current findings in that use of a clear speaking style yielded higher levels of perceived effort. Keerstock and Smiljanić (2021) also provided evidence that a clear speaking style requires more effort on the part of the speaker than habitual speech. Their results indicated that a speaker's recall of their own speech was worse for sentences produced using a clear speaking style than for sentences produced in conversational speech. The authors suggested that this finding was due to the increased effort required by the clear speech task (elicited by the instructions “Read this sentence clearly and carefully, as if talking to a non-native speaker of English or a person with hearing loss,” pp. 3389–3390). 
The authors further suggested that the clear speech task was “resource demanding” (p. 3395) and thus diverted cognitive resources necessary to successfully complete the memory task ( Keerstock & Smiljanić, 2021 ). In contrast, when listeners are asked to recall speech produced by others, they experience a memory/recall benefit for speech produced in a clear speaking style ( Gilbert et al., 2014 ; Keerstock & Smiljanić, 2018 , 2019 ; Van Engen et al., 2012 ). This speaker–listener dichotomy in benefits derived from a clear speaking style highlights the value of selecting and evaluating the suitability of treatment approaches from the perspective of both the speaker and the listener. For example, when evaluating the costs of a motor behavior, or speech in particular ( Cos, 2017 ; Morel et al., 2017 ; Whitfield et al., 2021 ), it should be established from whose perspective the cost is determined (i.e., speaker vs. listener; Olmstead et al., 2020 ).

Similar to the intelligibility findings, individual speakers' judgments of perceived effort varied across the clear speech instructions. Half of the control speakers (50%) and six of the PD speakers (43%) reported the highest values of speaking effort in the OE condition, whereas five of the control speakers (38%) and four of the PD speakers (29%) reported the highest values of speaking effort in the HI condition (see Figure 4 ). Interestingly, PD14, who had the lowest intelligibility score of all speakers in the habitual condition (and, therefore, was the most severely impaired; see Stipancic et al., 2021 ), reported constant levels of speaking effort across all of the speaking conditions, including the habitual condition. This suggests a ceiling effect in which this participant was already working at the limit of their physiologic, functional reserve ( Plowman, 2015 ) when using their typical speaking pattern.

The Relationship Between Intelligibility and Speaking Effort Is Complex

Overall, speakers with PD did not judge implementing variants of clear speech instruction to be significantly more effortful than the control group did, as evidenced by a nonsignificant group effect in the LME model for speaking effort. In addition, for the control group, there was no relationship between intelligibility and speaking effort, despite a large range of speaking effort scores (as shown in Figures 3 and 4). We hypothesize that the lack of relationship between intelligibility and speaking effort represents a ceiling effect, as controls were highly intelligible across all conditions. There was, however, a statistically significant, negative relationship between intelligibility and speaking effort for the PD group, indicating that as speaking effort increased, intelligibility decreased. This negative relationship appeared to be largely driven by the most severely impaired speaker (i.e., PD14), who had the lowest intelligibility scores; future work with a larger sample size is therefore needed to corroborate these findings. Examining the relationship between intelligibility and speaking effort within individual participants revealed more nuanced patterns (see Figure 4 ). For example, some participants with PD (8/14 = 57%; PD01, PD02, PD04, PD05, PD06, PD08, PD09, PD13) showed a clear increase in intelligibility that coincided with an increase in speaking effort. This pattern is consistent with the H&H theory ( Lindblom, 1990 ) in that greater effort facilitates increased acoustic contrasts at the hyperarticulate end of the continuum, resulting in improved intelligibility. In contrast, one speaker with PD (i.e., PD11) exhibited the exact opposite relationship, as intelligibility declined with increases in speaking effort. Interestingly, the negative relationship between intelligibility and speaking effort for PD11 was not predicted by overall dysarthria severity, as might be hypothesized.
PD11 had similar intelligibility in the habitual condition to many other speakers with PD who demonstrated a positive relationship between intelligibility and speaking effort. A few authors in the rehabilitation sciences literature have noted this dissociation between effort and performance, such that increased effort can be unproductive for improving motor performance (see Bruya, 2010 , for a review) or that what is most important is the type of effort that is being expended (see Hodges & Lohse, 2020 , for a review). Future work should explore predictors of the relationship between effort and motor performance, particularly as it relates to speech production.

Although the motor execution–effort relationship has received much more attention in the limb literature as compared to the speech literature, the relationship between motor execution and effort is, nevertheless, not well defined in the limb literature. Generally, novel motor skills are thought to require increased effort, whereas skilled movements tend to be associated with less effort, automaticity, and high efficiency (see Bruya, 2010 ; Hodges & Lohse, 2020 ; Sparrow, 1983 ; Wulf & Lewthwaite, 2016 , for reviews). Therefore, clinically, maintaining an intelligibility benefit while also reducing the amount of effort perceived by the patient may be a relevant goal. As such, speaking effort may be a relevant outcome measure for determining treatment efficacy. For example, in treatments aiming to improve intelligibility, it may be beneficial to observe a decrease in effort over time while maintaining intelligibility ( Richardson et al., 2022 ). Subjective measures of effort could serve as a proxy for task automaticity over time ( Whyte et al., 2019 ). For example, a recent study examined the physical and mental demand experienced by patients with PD during two interventions aimed at increasing vocal loudness. In this study, Richardson et al. (2022) suggested that changes in measures of physical and mental demand may reflect differences in “treatment burden” (pp. 10, 11). In particular, the authors found that, for one of the voice treatments, ratings of physical effort declined over time while performance improved, which was hypothesized to be the result of treatment-related motor adaptation and/or physical muscle conditioning ( Richardson et al., 2022 ). Relatedly, the physical therapy literature suggests that there may be a level of effort that is optimal for motor learning and performance (see Bruya, 2010 ; Hodges & Lohse, 2020 , for reviews).
Therefore, there may be a range of speaking effort that is advantageous for producing the most intelligible speech or for facilitating optimized learning of a new speaking style. Further research on this topic is warranted.

The approach illustrated in Figure 4 may be useful for determining the optimal instructions for eliciting clear speech in an individual patient. As an example, the figure shows that PD01 was 60% intelligible in the habitual condition and experienced the largest intelligibility gain under the OE condition (78.6% intelligibility, a difference of 18.6%). However, the OE condition elicited the highest amount of speaking effort (a score of 76 as compared to a score of 21.5 in the habitual condition). In this case, the SC condition, which still yielded a large, clinically significant improvement in intelligibility (77.5% intelligibility, a difference of 17.5% from the habitual condition) but a smaller increase in speaking effort (a score of 44.5 as compared to 21.5 in the habitual condition), may be more sustainable. Another possibility is that a clinician could select the OE condition for this speaker but monitor perceived speaking effort over therapy sessions to assess the sustainability of the newly adopted speaking style. If, for example, effort remains at a high level, trialing an alternative, less effortful speaking style may be appropriate. This raises the question: How much effort is too much effort? If a speaking condition maximizes intelligibility but also requires the greatest amount of effort (as in the current findings under the OE condition), it is important to evaluate a speaker's willingness to invest the additional effort required to achieve high intelligibility and the effect that this has on adherence to therapy and use of the speaking style. Conversations with patients regarding anticipated outcomes (i.e., increased intelligibility vs. high amounts of effort) may be a good starting point. This discussion is related to the concepts of economy of effort and the cost versus utility of motor behaviors.
Especially for patients with PD who experience heightened levels of fatigue and effort at baseline, the costs associated with high-effort therapy tasks may result in an undervaluation of an intelligibility benefit and the abandonment (or effort economization) of a particular speaking style. These factors are critical to determining therapeutic efficacy in this population. The interaction of intelligibility and speaking effort and their combined effect on therapy adherence require additional research in the future.
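One rough way to make the gain-versus-cost tradeoff for PD01 concrete is to express each clear speech variant as intelligibility gain per unit of added effort, using the values reported above. This is an illustrative sketch only: the "gain per effort point" ratio is a hypothetical heuristic of ours, not a metric used in the study, and the HI condition is omitted because its values for PD01 are not reported in the text.

```python
# PD01's values as reported in the text: intelligibility (%) and
# perceived speaking effort (0-100 visual analog scale).
conditions = {
    "habitual": {"intel": 60.0, "effort": 21.5},
    "SC":       {"intel": 77.5, "effort": 44.5},
    "OE":       {"intel": 78.6, "effort": 76.0},
}

base = conditions["habitual"]
for name, c in conditions.items():
    if name == "habitual":
        continue
    gain = c["intel"] - base["intel"]    # intelligibility benefit vs. habitual
    cost = c["effort"] - base["effort"]  # added effort vs. habitual
    print(f"{name}: +{gain:.1f}% intelligibility for +{cost:.1f} effort "
          f"({gain / cost:.2f}% per effort point)")
```

Under this hypothetical heuristic, SC delivers nearly the same intelligibility benefit as OE at less than half the added effort, which mirrors the reasoning above about SC potentially being the more sustainable choice for this speaker.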

Limitations and Future Directions

Some researchers have noted that speakers with PD have sensory perceptual difficulties ( Fox & Ramig, 1997 ; Ho et al., 2000 ; Sapir et al., 2011 ), which could, theoretically, impact subjective impressions of speaking effort. However, speakers with PD in the current study reported physical and mental speaking effort comparable to that of controls, which may suggest that they did not have difficulty rating effort. Although sensory perceptual deficits are important to consider when studying the PD population, they were beyond the scope of the current work and should be examined in future studies.

To date, no standardized definitions or metrics of “clear speech effort” or “speaking effort” have been established. Therefore, further research is needed to determine the reliability and validity of the VAS ratings used to capture speaking effort during clear speech. In the current study, reliability of the speaking effort measures was not obtained. Reliability of these measures will be important for work establishing thresholds for a clinically detectable and significant change (see Stipancic & Tjaden, 2022 ; Stipancic et al., 2018 ). Future work could examine variability in perceived effort as related to variability in intelligibility across longer speech materials (similar to the work by van Brenk et al., 2022 ) to better define the relationship between speaking effort and sustainability of a speaking style.

Last, the methods utilized in this study (e.g., highly controlled acoustic recordings of sentences read aloud, lab listening conditions) are not representative of natural communication. Studies are needed to determine the effect of different speaking conditions on intelligibility and speaking effort in more ecologically valid settings. Future work should also consider other factors that impact the sustainability of behavioral speech protocols, such as attention ( Whitfield et al., 2021 ; Wulf & Lewthwaite, 2016 ), memory ( Keerstock & Smiljanić, 2021 ), fatigue ( Friedman et al., 2007 ; Marr, 1991 ), and motivation ( Maclean & Pound, 2000 ; Wulf & Lewthwaite, 2016 ), as well as using speaking effort as a supplement to listener-derived speech outcomes (i.e., intelligibility). These factors will be relevant for appraising the cost of behavioral therapies from the perspective of speakers and will be useful for examining the sustainability of therapies in the long term. Finally, the current results suggest that future work should consider standardizing the instructions used to elicit clear speech, as variability in instructions limits the ability to compare outcomes across clear speech studies.

Data Availability Statement

Acknowledgments

This work was supported by National Institute on Deafness and Other Communication Disorders Grant R01DC004689 (Principal Investigator: Kris Tjaden). The authors thank all speaker and listener participants as well as past students for their contributions to this work: Jennifer Lam, Caroline Brown, Rebecca Jaffe, Heidi Kelleher, and Sara Silverman. Portions of this study were presented at the Biennial Conference on Motor Speech in 2018.

Funding Statement

This work was supported by National Institute on Deafness and Other Communication Disorders Grant R01DC004689 (Principal Investigator: Kris Tjaden). Portions of this study were presented at the Biennial Conference on Motor Speech in 2018.

  • Beukelman, D. R., Fager, S., Ullman, C., Hanson, E., & Logemann, J. (2002). The impact of speech supplementation and clear speech on the intelligibility and speaking rate of people with traumatic brain injury. Journal of Medical Speech-Language Pathology, 10(4), 237–242.
  • Bochner, J. H., Garrison, W. M., Sussman, J. E., & Burkard, R. F. (2003). Development of materials for the clinical assessment of speech recognition: The speech sound pattern discrimination test. Journal of Speech, Language, and Hearing Research, 46(4), 889–900. https://doi.org/10.1044/1092-4388(2003/069)
  • Boersma, P., & Weenink, D. (2018). Praat: Doing phonetics by computer (Version 6.0.43) [Computer software]. http://www.praat.org/
  • Broadfoot, C. K., Abur, D., Hoffmeister, J. D., Stepp, C. E., & Ciucci, M. R. (2019). Research-based updates in swallowing and communication dysfunction in Parkinson disease: Implications for evaluation and management. Perspectives of the ASHA Special Interest Groups, 4(5), 825–841. https://doi.org/10.1044/2019_pers-sig3-2019-0001
  • Bruya, B. (2010). Effortless attention: A new perspective in the cognitive science of attention and action (1st ed.). MIT Press. https://doi.org/10.7551/mitpress/9780262013840.001.0001
  • Caissie, R., Campbell, M. M., Frenette, W. L., Scott, L., Howell, I., & Roy, A. (2005). Clear speech for adults with a hearing loss: Does intervention with communication partners make a difference? Journal of the American Academy of Audiology, 16(3), 157–171. https://doi.org/10.3766/jaaa.16.3.4
  • Chiu, Y.-F., & Forrest, K. (2018). The impact of lexical characteristics and noise on intelligibility of Parkinsonian speech. Journal of Speech, Language, and Hearing Research, 61(4), 837–846. https://doi.org/10.1044/2017_JSLHR-S-17-0205
  • Cos, I. (2017). Perceived effort for motor control and decision-making. PLOS Biology, 15(8), Article e2002885. https://doi.org/10.1371/journal.pbio.2002885
  • Darling, M., & Huber, J. E. (2011). Changes to articulatory kinematics in response to loudness cues in individuals with Parkinson's disease. Journal of Speech, Language, and Hearing Research, 54(5), 1247–1259. https://doi.org/10.1044/1092-4388(2011/10-0024)
  • Duffy, J. R. (2020). Motor speech disorders: Substrates, differential diagnosis, and management (4th ed.). Mosby.
  • Elbers, R. G., van Wegen, E. E. H., Verhoef, J., & Kwakkel, G. (2012). Reliability and structural validity of the Multidimensional Fatigue Inventory (MFI) in patients with idiopathic Parkinson's disease. Parkinsonism & Related Disorders, 18(5), 532–536. https://doi.org/10.1016/j.parkreldis.2012.01.024
  • Ferguson, S. H. (2004). Talker differences in clear and conversational speech: Vowel intelligibility for normal-hearing listeners. The Journal of the Acoustical Society of America, 116(4), 2365–2373. https://doi.org/10.1121/1.1788730
  • Ferguson, S. H., & Kewley-Port, D. (2002). Vowel intelligibility in clear and conversational speech for normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America, 112(1), 259–271. https://doi.org/10.1121/1.1482078
  • Fox, C. M., & Ramig, L. O. (1997). Vocal sound pressure level and self-perception of speech and voice in men and women with idiopathic Parkinson disease. American Journal of Speech-Language Pathology, 6(2), 85–94. https://doi.org/10.1044/1058-0360.0602.85
  • Frank, T., & Craig, C. H. (1984). Comparison of the Auditec and Rintelmann recordings of the NU-6. Journal of Speech and Hearing Disorders, 49(3), 267–271. https://doi.org/10.1044/jshd.4903.267
  • Friedman, J. H., Beck, J. C., Chou, K. L., Clark, G., Fagundes, C. P., Goetz, C. G., Herlofson, K., Kluger, B., Krupp, L. B., Lang, A. E., Lou, J. S., Marsh, L., Newbould, A., & Weintraub, D. (2016). Fatigue in Parkinson's disease: Report from a multidisciplinary symposium. npj Parkinson's Disease, 2(1), 1–6. https://doi.org/10.1038/npjparkd.2015.25
  • Friedman, J. H., Brown, R. G., Comella, C., Garber, C. E., Krupp, L. B., Lou, J. S., Marsh, L., Nail, L., Shulman, L., Taylor, C. B., & Working Group on Fatigue in Parkinson's Disease. (2007). Fatigue in Parkinson's disease: A review. Movement Disorders, 22(3), 297–308. https://doi.org/10.1002/mds.21240
  • Gilbert, R. C., Chandrasekaran, B., & Smiljanić, R. (2014). Recognition memory in noise for speech of varying intelligibility. The Journal of the Acoustical Society of America, 135(1), 389–399. https://doi.org/10.1121/1.4838975
  • Goberman, A. M., & Elmer, L. W. (2005). Acoustic analysis of clear versus conversational speech in individuals with Parkinson disease. Journal of Communication Disorders, 38(3), 215–230. https://doi.org/10.1016/j.jcomdis.2004.10.001
  • Gravelin, A. C., & Whitfield, J. A. (2019). Effect of clear speech on the duration of silent intervals at syntactic and phonemic boundaries in the speech of individuals with Parkinson disease. American Journal of Speech-Language Pathology, 28(2S), 793–806. https://doi.org/10.1044/2018_AJSLP-MSC18-18-0102
  • Guenther, F. H., Ghosh, S. S., & Tourville, J. A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96(3), 280–301. https://doi.org/10.1016/j.bandl.2005.06.001
  • Guenther, F. H., & Perkell, J. S. (2004). A neural model of speech production and supporting experiments. In Slifka J., Manuel S., & Matthies M. (Eds.), From sound to sense: 50+ years of discoveries in speech communication (pp. 98–106). Massachusetts Institute of Technology.
  • Hanson, E. K., Yorkston, K. M., & Beukelman, D. R. (2004). Speech supplementation techniques for dysarthria: A systematic review. Journal of Medical Speech-Language Pathology, 12(2), ix–xxix.
  • Herd, C. P., Tomlinson, C. L., Deane, K. H. O., Brady, M. C., Smith, C. H., Sackley, C. M., Clarke, C. E., & Cochrane Movement Disorders Group. (2012). Comparison of speech and language therapy techniques for speech problems in Parkinson's disease. Cochrane Database of Systematic Reviews, 2017(8), CD002814. https://doi.org/10.1002/14651858.CD002814.pub2
  • Ho, A. K., Bradshaw, J. L., & Iansek, R. (2000). Volume perception in Parkinsonian speech. Movement Disorders, 15(6), 1125–1131. https://doi.org/10.1002/1531-8257(200011)15:6<1125::AID-MDS1010>3.0.CO;2-R
  • Hodges, N. J., & Lohse, K. R. (2020). Difficulty is a real challenge: A perspective on the role of cognitive effort in motor skill learning. Journal of Applied Research in Memory and Cognition, 9(4), 455–460. https://doi.org/10.1016/j.jarmac.2020.08.006
  • Kang, K.-H., & Guion, S. G. (2008). Clear speech production of Korean stops: Changing phonetic targets and enhancement strategies. The Journal of the Acoustical Society of America, 124(6), 3909–3917. https://doi.org/10.1121/1.2988292
  • Keerstock, S., & Smiljanić, R. (2018). Effects of intelligibility on within- and cross-modal sentence recognition memory for native and non-native listeners. The Journal of the Acoustical Society of America, 144(5), 2871–2881. https://doi.org/10.1121/1.5078589
  • Keerstock, S., & Smiljanić, R. (2019). Clear speech improves listeners' recall. The Journal of the Acoustical Society of America, 146(6), 4604–4610. https://doi.org/10.1121/1.5141372
  • Keerstock, S., & Smiljanić, R. (2021). Reading aloud in clear speech reduces sentence recognition memory and recall for native and non-native talkers. The Journal of the Acoustical Society of America, 150(5), 3387–3398. https://doi.org/10.1121/10.0006732
  • Kuruvilla-Dugdale, M., & Chuquilin-Arista, M. (2017). An investigation of clear speech effects on articulatory kinematics in talkers with ALS. Clinical Linguistics & Phonetics, 31(10), 725–742. https://doi.org/10.1080/02699206.2017.1318173
  • Lam, J., & Tjaden, K. (2013a). Acoustic-perceptual relationships in variants of clear speech. Folia Phoniatrica et Logopaedica, 65(3), 148–153. https://doi.org/10.1159/000355560
  • Lam, J., & Tjaden, K. (2013b). Intelligibility of clear speech: Effect of instruction. Journal of Speech, Language, and Hearing Research, 56(5), 1429–1440. https://doi.org/10.1044/1092-4388(2013/12-0335)
  • Lam, J., & Tjaden, K. (2016). Clear speech variants: An acoustic study in Parkinson's disease. Journal of Speech, Language, and Hearing Research, 59(4), 631–646. https://doi.org/10.1044/2015_JSLHR-S-15-0216
  • Lam, J., Tjaden, K., & Wilding, G. (2012). Acoustics of clear speech: Effect of instruction. Journal of Speech, Language, and Hearing Research, 55(6), 1807–1821. https://doi.org/10.1044/1092-4388(2012/11-0154)
  • Levitt, J. S. , Chitnis, S. , & Walker-Batson, D. (2015). The effects of the “SPEAK OUT!” and “LOUD Crowd” voice programs for Parkinson's disease . International Journal of Health Sciences , 3 ( 2 ), 13–19. https://doi.org/10.15640/ijhs.v3n2a3 [ Google Scholar ]
  • Lindblom, B. (1990). Explaining phonetic variation: A sketch of the H&H theory . In Hardcastle W. J. & Marchal A. (Eds.), Speech production and speech modeling (pp. 403–439). Kluwer Academic Publishers. https://doi.org/10.1007/978-94-009-2037-8_16 [ Google Scholar ]
  • Lou, J. S. , Kearns, G. , Oken, B. , Sexton, G. , & Nutt, J. (2001). Exacerbated physical fatigue and mental fatigue in Parkinson's disease . Movement Disorders , 16 ( 2 ), 190–196. https://doi.org/10.1002/mds.1042 [ PubMed ] [ Google Scholar ]
  • Maclean, N. , & Pound, P. (2000). A critical review of the concept of patient motivation in the literature on physical rehabilitation . Social Science & Medicine , 50 ( 4 ), 495–506. https://doi.org/10.1016/S0277-9536(99)00334-2 [ PubMed ] [ Google Scholar ]
  • Marr, J. (1991). The experience of living with Parkinson's disease . The Journal of Neuroscience Nursing , 23 ( 5 ), 325–329. https://doi.org/10.1097/01376517-199110000-00010 [ PubMed ] [ Google Scholar ]
  • Milenkovic, P. (2005). TF32 [Software program] . University of Wisconsin–Madison. [ Google Scholar ]
  • Miller, N. , Noble, E. , Jones, D. , & Burn, D. (2006). Life with communication changes in Parkinson's disease . Age and Ageing , 35 ( 3 ), 235–239. https://doi.org/10.1093/ageing/afj053 [ PubMed ] [ Google Scholar ]
  • Molloy, D. W. (1999). The Standardized Mini-Mental Status Exam (SMMSE) . New Grange Press. [ Google Scholar ]
  • Morel, P. , Ulbrich, P. , & Gail, A. (2017). What makes a reach movement effortful? Physical effort discounting supports common minimization principles in decision making and motor control . PLOS Biology , 15 ( 6 ), Article e2001323. https://doi.org/10.1371/journal.pbio.2001323 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Muñoz-Vigueras, N. , Prados-Román, E. , Valenza, M. C. , Granados-Santiago, M. , Cabrera-Martos, I. , Rodríguez-Torres, J. , & Torres-Sánchez, I. (2020). Speech and language therapy treatment on hypokinetic dysarthria in Parkinson disease: Systematic review and meta-analysis . Clinical Rehabilitation , 35 ( 5 ), 639–655. https://doi.org/10.1177/0269215520976267 [ PubMed ] [ Google Scholar ]
  • Olmstead, A. J. , Lee, J. , & Viswanathan, N. (2020). The role of the speaker, the listener, and their joint contributions during communicative interactions: A tripartite view of intelligibility in individuals with dysarthria . Journal of Speech, Language, and Hearing Research , 63 ( 4 ), 1106–1114. https://doi.org/10.1044/2020_JSLHR-19-00233 [ PubMed ] [ Google Scholar ]
  • Park, S. , Theodoros, D. , Finch, E. , & Cardell, E. (2016). Be Clear: A new intensive speech treatment for adults with nonprogressive dysarthria . American Journal of Speech-Language Pathology , 25 ( 1 ), 97–110. https://doi.org/10.1044/2015_AJSLP-14-0113 [ PubMed ] [ Google Scholar ]
  • Perkell, J. S. , Guenther, F. H. , Lane, H. , Matthies, M. L. , Perrier, P. , Vick, J. , Wilhelms-Tricarico, R. , & Zandipour, M. (2000). A theory of speech motor control and supporting data from speakers with normal hearing and with profound hearing loss . Journal of Phonetics , 28 ( 3 ), 233–272. https://doi.org/10.1006/jpho.2000.0116 [ Google Scholar ]
  • Perkell, J. S. , Zandipour, M. , Matthies, M. L. , & Lane, H. (2002). Economy of effort in different speaking conditions. I. A preliminary study of intersubject differences and modeling issues . The Journal of the Acoustical Society of America , 112 ( 4 ), 1627–1641. https://doi.org/10.1121/1.1506369 [ PubMed ] [ Google Scholar ]
  • Pinto, S. , Chan, A. , Guimarães, I. , Rothe-Neves, R. , & Sadat, J. (2017). A cross-linguistic perspective to the study of dysarthria in Parkinson's disease . Journal of Phonetics , 64 , 156–167. https://doi.org/10.1016/j.wocn.2017.01.009 [ Google Scholar ]
  • Plowman, E. K. (2015). Is there a role for exercise in the management of bulbar dysfunction in amyotrophic lateral sclerosis? Journal of Speech, Language, and Hearing Research , 58 ( 4 ), 1151–1166. https://doi.org/10.1044/2015_JSLHR-S-14-0270 [ PubMed ] [ Google Scholar ]
  • R Development Core Team. (2013). R: A language and environment for statistical computing . R Foundation for Statistical Computing. [ Google Scholar ]
  • Richardson, K. , Huber, J. E. , Kiefer, B. , & Snyder, S. (2022). Perception of physical demand, mental demand, and performance: A comparison of two voice interventions for Parkinson's disease . American Journal of Speech-Language Pathology , 31 ( 5 ), 1963–1978. https://doi.org/10.1044/2022_AJSLP-22-00026 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Richardson, K. , Sussman, J. E. , Stathopoulos, E. T. , & Huber, J. E. (2014). The effect of increased vocal intensity on interarticulator timing in speakers with Parkinson's disease: A preliminary analysis . Journal of Communication Disorders , 52 , 44–64. https://doi.org/10.1016/j.jcomdis.2014.09.004 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Roh, J. L. , Kim, H. S. , & Kim, A. Y. (2006). The effect of acute xerostomia on vocal function . Archives of Otolaryngology—Head & Neck Surgery , 132 ( 5 ), 542–546. https://doi.org/10.1001/archotol.132.5.542 [ PubMed ] [ Google Scholar ]
  • Rosen, K. M. , Folker, J. E. , Murdoch, B. E. , Vogel, A. P. , Cahill, L. M. , Delatycki, M. B. , & Corben, L. A. (2011). Measures of spectral change and their application to habitual, slow, and clear speaking modes . International Journal of Speech-Language Pathology , 13 ( 2 ), 165–173. https://doi.org/10.3109/17549507.2011.529939 [ PubMed ] [ Google Scholar ]
  • Rudner, M. , Lunner, T. , Behrens, T. , Thorén, E. S. , & Rönnberg, J. (2012). Working memory capacity may influence perceived effort during aided speech recognition in noise . Journal of the American Academy of Audiology , 23 ( 8 ), 577–589. https://doi.org/10.3766/jaaa.23.7.7 [ PubMed ] [ Google Scholar ]
  • Sapir, S. , Ramig, L. O. , & Fox, C. M. (2011). Intensive voice treatment in Parkinson's disease: Lee Silverman Voice Treatment . Expert Review of Neurotherapeutics , 11 ( 6 ), 815–830. https://doi.org/10.1586/ern.11.43 [ PubMed ] [ Google Scholar ]
  • Smets, E. M. A. , Garssen, B. , Bonke, B. , & De Haes, J. C. (1995). The Multidimensional Fatigue Inventory (MFI) psychometric qualities of an instrument to assess fatigue . Journal of Psychosomatic Research , 39 ( 3 ), 315–325. https://doi.org/10.1016/0022-3999(94)00125-O [ PubMed ] [ Google Scholar ]
  • Smiljanić, R. (2021). Clear speech perception: Linguistic and cognitive benefits . In Pardo J. F., Nygaard L. C., Remez R. E., & Pisoni D. B. (Eds.), The handbook of speech perception (2nd ed., pp. 177–205). Wiley-Blackwell. https://doi.org/10.1002/9781119184096.ch7 [ Google Scholar ]
  • Smiljanić, R. , & Bradlow, A. R. (2005). Production and perception of clear speech in Croatian and English . The Journal of the Acoustical Society of America , 118 ( 3 ), 1677–1688. https://doi.org/10.1121/1.2000788 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Smiljanić, R. , & Bradlow, A. R. (2009). Speaking and hearing clearly: Talker and listener factors in speaking style changes . Language and Linguistics Compass , 3 ( 1 ), 236–264. https://doi.org/10.1111/j.1749-818X.2008.00112.x [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Solomon, N. P. (2000). Changes in normal speech after fatiguing the tongue . Journal of Speech, Language, and Hearing Research , 43 ( 6 ), 1416–1428. https://doi.org/10.1044/jslhr.4306.1416 [ PubMed ] [ Google Scholar ]
  • Solomon, N. P. , & Robin, D. A. (2005). Perceptions of effort during handgrip and tongue elevation in Parkinson's disease . Parkinsonism & Related Disorders , 11 ( 6 ), 353–361. https://doi.org/10.1016/j.parkreldis.2005.06.004 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sparrow, W. A. (1983). The efficiency of skilled performance . Journal of Motor Behavior , 15 ( 3 ), 237–261. https://doi.org/10.1080/00222895.1983.10735299 [ PubMed ] [ Google Scholar ]
  • Spencer, K. A. , Friedlander, C. , & Brown, K. A. (2020). Predictors of health-related quality of life and communicative participation in individuals with dysarthria from Parkinson's disease . International Journal of Neurodegenerative Disorders , 3 ( 1 ), 1–7. https://doi.org/10.23937/2643-4539/1710014 [ Google Scholar ]
  • Stipancic, K. L. , Palmer, K. M. , Rowe, H. P. , Yunusova, Y. , Berry, J. D. , & Green, J. R. (2021). “You say severe, I say mild”: Toward an empirical classification of dysarthria severity . Journal of Speech, Language, and Hearing Research , 64 ( 12 ), 4718–4735. https://doi.org/10.1044/2021_JSLHR-21-00197 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Stipancic, K. L. , & Tjaden, K. (2022). Minimally detectable change of speech intelligibility in speakers with multiple sclerosis and Parkinson's disease . Journal of Speech, Language, and Hearing Research , 65 ( 5 ), 1858–1866. https://doi.org/10.1044/2022_JSLHR-21-00648 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Stipancic, K. L. , Tjaden, K. , & Wilding, G. (2016). Comparison of intelligibility measures for adults with Parkinson's disease, adults with multiple sclerosis, and healthy controls . Journal of Speech, Language, and Hearing Research , 59 ( 2 ), 230–238. https://doi.org/10.1044/2015_JSLHR-S-15-0271 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Stipancic, K. L. , Yunusova, Y. , Berry, J. D. , & Green, J. R. (2018). Minimally detectable change and minimal clinically important difference of a decline in sentence intelligibility and speaking rate for individuals with amyotrophic lateral sclerosis . Journal of Speech, Language, and Hearing Research , 61 ( 11 ), 2757–2771. https://doi.org/10.1044/2018_JSLHR-S-17-0366 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sussman, J. E. , & Tjaden, K. (2012). Perceptual measures of speech from individuals with Parkinson's disease and multiple sclerosis: Intelligibility and beyond . Journal of Speech, Language, and Hearing Research , 55 ( 4 ), 1208–1219. https://doi.org/10.1044/1092-4388(2011/11-0048)1208 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Taylor, I. M. , Smith, K. , & Hunte, R. (2020). Motivational processes during physical endurance tasks . Scandinavian Journal of Medicine & Science in Sports , 30 ( 9 ), 1769–1776. https://doi.org/10.1111/sms.13739 [ PubMed ] [ Google Scholar ]
  • Tjaden, K. , Sussman, J. E. , & Wilding, G. E. (2014). Impact of clear, loud, and slow speech on scaled intelligibility and speech severity in Parkinson's disease and multiple sclerosis . Journal of Speech, Language, and Hearing Research , 57 ( 3 ), 779–792. https://doi.org/10.1044/2014_JSLHR-S-12-0372 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Tsao, Y.-C. , & Weismer, G. (1997). Interspeaker variation in habitual speaking rate: Evidence for a neuromuscular component . Journal of Speech, Language, and Hearing Research , 40 ( 4 ), 858–866. https://doi.org/10.1044/jslhr.4004.858 [ PubMed ] [ Google Scholar ]
  • Tsao, Y.-C. , Weismer, G. , & Iqbal, K. (2006). Interspeaker variation in habitual speaking rate: Additional evidence . Journal of Speech, Language, and Hearing Research , 49 ( 5 ), 1156–1164. https://doi.org/10.1044/1092-4388(2006/083) [ PubMed ] [ Google Scholar ]
  • Turner, G. S. , Tjaden, K. , & Weismer, G. (1995). The influence of speaking rate on vowel space and speech intelligibility for individuals with amyotrophic lateral sclerosis . Journal of Speech and Hearing Research , 38 ( 5 ), 1001–1013. https://doi.org/10.1044/jshr.3805.1001 [ PubMed ] [ Google Scholar ]
  • Uchanski, R. M. (2008). Clear speech . In Pisoni D. B. & Remez R. E. (Eds.), The handbook of speech perception (pp. 207–235). Blackwell Publishing. https://doi.org/10.1002/9780470757024 [ Google Scholar ]
  • van Brenk, F. , Stipancic, K. L. , Kain, A. , & Tjaden, K. (2022). Intelligibility across a reading passage: The effect of dysarthria and cued speaking styles . American Journal of Speech-Language Pathology , 31 ( 1 ), 390–408. https://doi.org/10.1044/2021_AJSLP-21-00151 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Van Engen, K. J. , Chandrasekaran, B. , & Smiljanić, R. (2012). Effects of speech clarity on recognition memory for spoken sentences . PLOS ONE , 7 ( 9 ), Article e43753. https://doi.org/10.1371/journal.pone.0043753 [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • van Hooren, M. R. A. , Baijens, L. W. J. , Vos, R. , Pilz, W. , Kuijpers, L. M. F. , Kremer, B. , & Michou, E. (2016). Voice- and swallow-related quality of life in idiopathic Parkinson's disease . The Laryngoscope , 126 ( 2 ), 408–414. https://doi.org/10.1002/lary.25481 [ PubMed ] [ Google Scholar ]
  • Weinstein, B. E. , & Ventry, I. M. (1983). Audiometric correlates of the Hearing Handicap Inventory for the Elderly . Journal of Speech and Hearing Disorders , 48 ( 4 ), 379–384. https://doi.org/10.1044/jshd.4804.379 [ PubMed ] [ Google Scholar ]
  • Whitehill, T. L. , & Wong, C. C.-Y. (2006). Contributing factors to listener effort for dysarthric speech . Journal of Medical Speech-Language Pathology , 14 ( 4 ), 335–341. [ Google Scholar ]
  • Whitfield, J. A. , & Goberman, A. M. (2014). Articulatory-acoustic vowel space: Application to clear speech in individuals with Parkinson's disease . Journal of Communication Disorders , 51 , 19–28. https://doi.org/10.1016/j.jcomdis.2014.06.005 [ PubMed ] [ Google Scholar ]
  • Whitfield, J. A. , Holdosh, S. R. , Kriegel, Z. , Sullivan, L. E. , & Fullenkamp, A. M. (2021). Tracking the costs of clear and loud speech: Interactions between speech motor control and concurrent visuomotor tracking . Journal of Speech, Language, and Hearing Research , 64 ( 6S ), 2182–2195. https://doi.org/10.1044/2020_JSLHR-20-00264 [ PubMed ] [ Google Scholar ]
  • Whyte, J. , Dijkers, M. P. , Hart, T. , Van Stan, J. H. , Packel, A. , Turkstra, L. S. , Zanca, J. M. , Chen, C. , & Ferraro, M. (2019). The importance of voluntary behavior in rehabilitation treatment and outcomes . Archives of Physical Medicine and Rehabilitation , 100 ( 1 ), 156–163. https://doi.org/10.1016/j.apmr.2018.09.111 [ PubMed ] [ Google Scholar ]
  • Wulf, G. , & Lewthwaite, R. (2016). Optimizing performance through intrinsic motivation and attention for learning: The OPTIMAL theory of motor learning . Psychonomic Bulletin & Review , 23 ( 5 ), 1382–1414. https://doi.org/10.3758/s13423-015-0999-9 [ PubMed ] [ Google Scholar ]
  • Yorkston, K. M. (2010). Management of motor speech disorders in children and adults (3rd ed.). Pro-Ed. [ Google Scholar ]
  • Yorkston, K. M. , Beukelman, D. R. , Hakel, M. , & Dorsey, M. (2007). Speech Intelligibility Test [Computer software] . Institute for Rehabilitation Science and Engineering at Madonna Rehabilitation Hospital. [ Google Scholar ]
  • Yorkston, K. M. , Beukelman, D. R. , & Traynor, C. (1984). Computerized assessment of intelligibility of dysarthric speech . C.C. Publications. [ Google Scholar ]
  • Yorkston, K. M. , Hakel, M. , Beukelman, D. R. , & Fager, S. (2007). Evidence for effectiveness of treatment of loudness, rate, or prosody in dysarthria: A systematic review . Journal of Medical Speech-Language Pathology , 15 ( 2 ), xi–xxxvi. [ Google Scholar ]
  • Yorkston, K. M. , Strand, E. A. , & Kennedy, M. R. T. (1996). Comprehensibility of dysarthric speech: Implications for assessment and treatment planning . American Journal of Speech-Language Pathology , 5 ( 1 ), 55–66. https://doi.org/10.1044/1058-0360.0501.55 [ Google Scholar ]
  • Zipf, G. K. (1965). Human behavior and the principle of least effort: An introduction to human ecology . Addison-Wesley. [ Google Scholar ]

JN.1 COVID Variant Symptoms vs. Allergy Symptoms

Key Takeaways

  • JN.1 is the most common COVID-19 variant in the U.S. right now.
  • This variant currently causes nearly 84% of COVID-19 cases.
  • JN.1 symptoms can be confused with those of seasonal allergies.

The JN.1 COVID variant emerged in the U.S. in November 2023 and quickly gained steam. Now it’s responsible for nearly 84% of COVID-19 cases in the country, and that dominance is coinciding with peak allergy season.

Data suggest that COVID-19 severity indicators, including hospitalizations and deaths, are currently on the decline. But knowing whether and how you can differentiate COVID from allergies is important to prevent further spread of the disease. Here’s what infectious disease doctors want you to know.

JN.1 Is an Offshoot of Omicron

The JN.1 COVID variant descended from BA.2.86, a variant that was first sequenced in July 2023, Thomas Russo, MD , professor and chief of infectious diseases at the University at Buffalo in New York, told Verywell. “JN.1 is an Omicron variant,” he said.

BA.2.86 has 20 differences in its spike protein from previous Omicron variants, Russo explained, noting that the spike protein is what SARS-CoV-2, the virus that causes COVID-19, uses to infect you. “JN.1 has an additional mutation on its spike protein,” he said.  

JN.1 “is very, very contagious, but it does not appear to produce more severe disease,” William Schaffner, MD , a professor at the Vanderbilt University School of Medicine, told Verywell.

Symptoms of the JN.1 Variant

The list of COVID-19 symptoms from the Centers for Disease Control and Prevention (CDC) has been consistent for months, despite several variants coming and going. Those symptoms include:

  • Fever or chills
  • Shortness of breath or difficulty breathing
  • Muscle or body aches
  • New loss of taste or smell
  • Sore throat
  • Congestion or runny nose
  • Nausea or vomiting

Schaffner said that there aren’t “striking differences” between the symptoms of previous COVID-19 variants and JN.1, but there are a few small changes.

“Sore throat is more common, as is nausea,” he said. “Diarrhea might be a bit more common with this variant.”

There is also less of a chance of losing taste and smell with JN.1 than with past variants, he said.

“The latest variants have been mimicking the common cold,” Russo said. “We’re seeing milder symptoms as a general rule. However, if you’re in a high-risk group, COVID-19 can still cause serious disease and is potentially lethal.”

How to Tell JN.1 Variant Symptoms From Seasonal Allergies

Russo said that it can be tricky to tell symptoms of JN.1 from seasonal allergies, but there are a few hints you may be dealing with one over the other.

“Certainly, seasonal allergies can give you a runny nose and clog your sinuses, but it will not give you a fever,” he said. “Shortness of breath would not be due to seasonal allergies.”

Seasonal allergies are also unlikely to cause a cough or congestion that is deep in your chest, Schaffner said. “You should also not get diarrhea,” he said. “Those are symptoms that are more likely to indicate a viral infection.”

If you have seasonal allergies and experience symptoms around the same time every year—including this year—that may indicate that your current illness is due to allergies, Russo said. “But the best way to differentiate COVID-19 from seasonal allergies is to test for COVID-19,” he added.

Schaffner said that having symptoms of a respiratory virus right now makes it more likely that you have COVID-19 compared to the flu or RSV. “The flu and RSV have pretty much returned to low baseline levels,” he said.

Still, he said you should test yourself for COVID-19 if you’re not sure what illness you have.

What This Means For You

Seasonal allergy symptoms can be confused with those of the JN.1 COVID variant. If you’re unsure what illness you have, doctors recommend testing yourself for COVID-19 and taking next steps from there.

The information in this article is current as of the date listed, which means newer information may be available when you read this. For the most recent updates on COVID-19, visit our coronavirus news page.

Centers for Disease Control and Prevention. COVID data tracker summary of variant surveillance .

Centers for Disease Control and Prevention. Symptoms of COVID-19 .

By Korin Miller. Miller is a health and lifestyle journalist with a master's degree in online journalism. Her work appears in The Washington Post, Prevention, SELF, Women's Health, and more.

Highlights from Day 3 of Trump’s hush money trial

What to know about Trump's hush money trial

  • Former President Donald Trump's hush money trial resumes in New York City for the third day today with jury selection. Twelve jurors have been seated so far, with new additions today including a man who works in investment banking and a security engineer.
  • Tuesday's proceedings in state Judge Juan Merchan's courtroom were marked by fiery exchanges over Trump's behavior and old Facebook posts of prospective jurors.
  • Trump has pleaded not guilty to 34 counts of falsifying business records related to a $130,000 payment made to adult film actor Stormy Daniels at the end of the 2016 election cycle to keep her quiet about her allegation that she and Trump had a sexual encounter. Trump has denied the affair.
  • Catch up with what you missed on Day 2 .

Trump returns to Trump Tower

Megan Lebowitz

The former president's motorcade has returned to Trump Tower after the third day of the hush money trial.

Meet the 12 jurors at Trump’s hush money trial

Rebecca Shabad is in Washington, D.C.

All 12 jurors, plus an alternate, were selected this week to serve on the jury after they made it clear to both sides that they could render a fair and impartial verdict.

Prosecutors and the defense team whittled down a pool of nearly 200 people to 12 jurors and an alternate after having grilled them about their personal histories, political views, social media posts and ability to remain impartial despite any opinions they might have about the polarizing former president.

Here's a brief description of each juror.

Read the full story here.

Trump attorney asks who the DA plans to call as first 3 witnesses

Zoë Richards

Trump attorney Todd Blanche asked whom the district attorney's office plans to call as its first three witnesses. Joshua Steinglass of the DA’s office refused on the basis that Trump has been tweeting about them.

Judge Merchan said he does not fault the DA’s office for its position. Blanche said Trump will not tweet about the witnesses, which Merchan said Blanche cannot promise, and he told him to treat the information as “attorneys’ eyes only.”

Merchan declined to order the DA’s office to name its first three witnesses, and Steinglass did not otherwise agree to do so.

Trump continues criticizing the case after court proceedings end for the day

Trump addressed reporters after court was dismissed for the day. He said that he was supposed to be in states like Georgia, New Hampshire and North Carolina to campaign but that instead "I've been here all day" for an "unfair trial."

Trump held up a stack of news stories and editorials that he said were critical of the case. He continued railing against the trial. "The whole thing is a mess," he said.

Trump did not respond to shouted questions from reporters.

Judge gives instructions to newly sworn-in jurors

Matt Johnson

Judge Merchan gave instructions to the jurors who were sworn in minutes ago. Among them: Do not discuss the case.

The jurors were then escorted out of the courtroom and walked past the defense table, from which Trump stared at them.

Court ends for the day; dismissal on Monday and Tuesday will be at 2 p.m.

Gary Grumbach

The court has decided that 2 p.m. will be the trial end time next Monday and Tuesday.

Here's the gender breakdown of the 12-person jury

Ginger Gibson, Senior Washington Editor

The jury is seven men and five women.

Jurors are sworn in

The jurors selected today to sit on the panel were sworn in, vowing to hear the case in a "fair and impartial manner."

Trump watched as they raised their right hands for the swearing-in.

Jury selection will continue tomorrow for the six alternates.

Twelve jurors have been selected

The court has now seated 12 jurors.

“We have our jury,” Judge Merchan said when the 12th juror was picked.

The next six jurors selected will serve as alternates.

“I’m hopeful we will finish tomorrow,” the judge added.

Potential juror says she was a Bernie Sanders supporter when posting critically about Trump

A potential juror has been brought back into the courtroom for questions about her social media posts.

As she read one of her posts to the court, she said she was a Bernie Sanders supporter at the time.

“I was in a disturbed frame of mind during that election cycle," she said, adding that she no longer holds the positions expressed in the post.

Two more jurors seated, bringing the total to seven

Two new jurors have been seated, bringing the total seated back to seven after two were dismissed earlier.

The jurors are a man who works in investment banking and a man who is a security engineer.

Trump attorney questions juror's social media posts about former president

Alexandra Marquez is based in Washington, D.C.

Susan Necheles, a Trump attorney, is challenging Juror No. 430 for cause.

She alleges that the juror's posts through 2020 were vitriolic and that the juror called Trump a “racist, sexist narcissist” on social media.

Necheles also said the juror said, “Trump is an anathema to everything I was taught about Jesus … and could not be more fundamentally un-Christian.”

Defense lawyer cites book of journalist who is in the courtroom

Trump lawyer Susan Necheles referred to New York Times reporter Maggie Haberman's book "Confidence Man: The Making of Donald Trump and the Breaking of America."

Haberman, who is covering the trial, is in the courtroom as part of the small pool of journalists allowed inside to share information about the jury selection process.

Prospective juror says it was pretty difficult not to have strong feelings about Trump during his presidency

One potential juror said it was pretty difficult not to have strong feelings or conversations about Trump during his presidency.

"There’s so much information about him everywhere. So no matter how you feel, you’re seeing things online," she said. "I mean he was our president, everyone knows who he is.”

One juror says they're a centrist and 'everybody needs a chance'

Jillian Frankel

One juror who was just questioned during voir dire told Necheles that they are a "centrist."

The juror added, "Everybody needs a chance, regardless of who they are, to be innocent until proven guilty.”

Court takes brief break to discuss strikes

The court has taken a brief break to discuss which jurors each side would like to strike.

Both the prosecution and the defense have four peremptory strikes remaining. Either side can also request that jurors be struck for cause.

Potential juror shares encounter with Trump and ex-wife 'shopping for baby things'

One prospective juror, who said they were born and raised in Brooklyn, described encountering Trump and his ex-wife Marla Maples once while the couple was "shopping for baby things" at ABC Home, an iconic Manhattan home goods store known for quirky, upscale decor.

Trump and Maples were married in the 1990s and share one daughter, Tiffany Trump.

Prospective juror says she doesn't have 'strong feelings' about Trump

One prospective juror told Trump's lawyer, "His politics aren't always my politics," but said she agrees with him on some policies and disagrees with him on others.

"But as a human being, that's a different topic," she said.

Asked about social media activity, she said, "Politics just seems like a nasty thing to be posting about during a national crisis."

She added, "I just don’t have strong feelings about President Trump at this point...I don’t post about him.”

One juror previously met Trump's lawyer

One of the jurors being questioned by Steinglass said she previously met one of Trump's attorneys.

Asked by Steinglass whether she could remain impartial despite that, the juror said she had no concerns about her impartiality.

Prosecution refers to 'accomplice liability' to explain case theory

Laura Jarrett

For the second time in a week, the prosecution has used a notable example of “accomplice liability” in explaining its theory of the case to the prospective jurors.

Steinglass said that Trump is being held liable just as a husband who hires a hitman to kill his wife would be: even if the husband is in a different city when it happens, he’s still criminally liable.

One juror says she's concerned she knows too much about the case

One prospective juror said during the questionnaire that she had read Mark Pomerantz's book and was worried she knew too much about the case.

"I’m worried that I know too much," she said. “And academically, I know I have put it to the side. I’m worried that it’s going to seep in, in some way.”

Pomerantz is a former prosecutor who once oversaw the Manhattan District Attorney Office’s investigation into Trump.

Trump appears skeptical as voir dire begins

Trump watched skeptically as Steinglass asked the jurors whether any of them felt the district attorney would have to prove more because Trump is not like any other defendant.

Trump's body is not turned toward the jury or Steinglass, but his head is. Blanche and Bove are watching Steinglass and the jury more intently.

Trump then scribbled on a piece of paper and handed it to Bove, who shared it with Necheles. She then had a short exchange with Trump.

Judge Merchan says voir dire of prospective jurors will begin

The judge told the group of 18 prospective jurors who previously went through the questionnaire that they will now be questioned by both sides, with the prosecution up first.

Court back in session

Merchan is back on the bench and court is back in session. Attorneys for both sides will now question prospective jurors.

Spotted outside of the courthouse: former GOP Rep. George Santos

George Santos

Former Rep. George Santos, R-N.Y., was spotted outside of the courthouse. He did not answer a question from NBC News about what brought him here today.

Santos was ejected from Congress in December after he was federally charged with crimes like wire fraud and money laundering. He has pleaded not guilty. He is currently running for Congress in New York as an independent.

Court goes on a lunch break

The court has recessed for lunch until 2:15 p.m.

Juror dismissed after tying Trump to Berlusconi

One juror was just dismissed after disclosing that he was born and raised in Italy and then comparing Trump to Silvio Berlusconi, the former prime minister of Italy.

Berlusconi, who died last June, was an infamous womanizer and was convicted of tax fraud in 2013.

Potential juror says he's a few credits short of a college degree

One potential juror said that while he graduated from high school, he is a few credits short of a college degree, "which kills my parents."

A cold courtroom

Blanche, Trump's lawyer, just asked if they could make it warmer in the courtroom, saying, "it’s freezing" in the room.

Merchan agreed, "It’s chilly in here, no question."

Merchan excuses Juror No. 4

After a conference with the juror, Merchan announced he's excusing Juror No. 4, who had previously been seated and sworn in. The DA's office had raised questions about his prior arrest.

Seated juror 'expressed annoyance' about his personal information becoming public

A seated juror was called for questioning, with prosecutors inquiring about whether or not he was truthful in answering questions about his past criminal history.

Following a conference between the juror and Merchan, the judge said the juror "expressed annoyance about how much information was out there about him in the public.”

And Merchan sealed the portion of the transcript where he says the juror discussed "highly personal" information.

Trump left the courtroom while decision on Juror 4 being made

Trump exited the courtroom at 11:45 a.m. He returned about eight minutes later.

One prospective juror works in law enforcement

One potential juror said that he has worked in law enforcement for 34 years and, in his spare time, he has season tickets to New York Rangers games and enjoys going to Yankees games.

Dismissed juror has "satirized Mr. Trump, often" online

Another dismissed juror, Mark, spoke to NBC News' Vaughn Hillyard outside the courthouse, telling him that he determined he couldn't be fair and impartial because, "I have satirized Mr. Trump, often, in my artwork."

Mark added, "There’s no way that Blanche — who’s not going to rely on the kindness of strangers — would permit me to be on the jury ... There’s no way that after my online presence ... that they would regard me to be fit to serve."

Mark's online comedy hadn't yet come up in the process when he raised his hand to signal he couldn't be fair and impartial, but he was sure Trump's lawyers would figure it out.

"It would be a waste of their time and, frankly, as a taxpayer, our money — for me to clog up the process," he added.

Juror 4 has arrived

The person previously seated on the jury has come into the courtroom. He is going to be asked about crimes he or his wife are alleged to have committed, after they were unearthed by the DA's office.

Court takes a brief break

The court has taken a brief break.

One juror has read part of Michael Cohen's book

One of the jurors responding to questions said she has read several pages of "Disloyal," a book by Michael Cohen, Trump's former personal attorney, who is a potential witness in this case.

The juror said she read part of the book for unspecified "business reasons." Earlier in her questionnaire, the juror said she works in publishing, but it's unclear whether the book was directly related to her job.

Prospective juror says while he doesn't have strong beliefs about Trump, he does read The New York Times

A prospective juror who was just questioned said that while he doesn't have any strong opinions or firmly held beliefs about Trump, he does "read the news, New York Times and so forth."

The same person said he follows Trump's Truth Social posts, as well as Michael Cohen on X.

Potential jurors say they have read Trump's "The Art of the Deal"

One potential juror who said she subscribes to The New York Times, mainly for the crossword puzzle, said she read Trump's "The Art of the Deal" book decades ago.

The juror also said she has a relative who works for the Justice Department.

Another juror, who said he works in finance, also said he read "The Art of the Deal."

Questionnaire highlights tension points for potential jurors

The potential juror being questioned now by the judge encapsulates how tough it is for some working professionals called for jury duty in Manhattan to say they cannot be fair and impartial. This is a person who is a practicing attorney. 

She appears not to want to say publicly she can’t be fair, notwithstanding some deep sighs we can hear from her. She also clerked for a federal judge and discussed the case with him, so she’s treading carefully.

Dismissed juror: Trump "looked less orange" than I expected

One dismissed juror spoke to MSNBC's Yasmin Vossoughian outside the courthouse following her exit from the case.

"Everyone was shocked, everyone was frozen," said the woman, identified only by her first name, Kat. She recounted the moment she and fellow prospective jurors walked into the room and realized they'd been called for the Trump trial.

“We went into the courtroom and we saw Donald Trump ... I was shocked, I was sitting in the second row, like 6 feet away," she added.

Before showing up for jury duty, “I didn’t really [follow the case], I was too busy," Kat said, but added that she just became a U.S. citizen in August and realized, "I feel the duty, I’m a citizen and I have responsibilities.”

Asked about how Trump looked in the courtroom, Kat said, "He looked less orange" than she was expecting.

She added, “He doesn’t look angry or — I think he looks bored, like he wants this to finish.”

Potential juror said she discussed former Manhattan prosecutor Mark Pomerantz's book with others

Summer Concepcion

The first potential juror said she had discussed the case at length with co-workers, including a book written by Mark Pomerantz, the former Manhattan prosecutor who led the investigation into Trump’s alleged financial crimes. She said she hasn't read any of the books written by Michael Cohen or Trump.

The woman also disclosed that she attended the Women's March after Trump took office.

48 prospective jurors excused after signaling they can't be fair or impartial

After Judge Merchan told the pool of prospective jurors to raise their hand if they can't be fair and impartial, 48 out of 96 were excused.

Trump again closes his eyes while Merchan reads jury instructions aloud

Katherine Doyle

Trump again closed his eyes while Merchan read aloud jury instructions. He didn't open them when his lawyer Emil Bove passed a note to Blanche in front of him.

Merchan is soft-spoken and his voice has a relaxing tone. Trump is seen moving his head back and forth while his eyes remain closed.

Trump yawned as Merchan reached the end of the jury instruction.

Juror issues raise questions about trial timeline

The fact that we now have one juror dismissed already this morning and one potentially on the rocks (for apparently not being forthcoming on the questionnaire) shows the challenges in predicting when a final slate of 12 jurors will be empaneled. 

It also shows how waiting several more days before the opening statements runs the risk that more jurors will drop out as they sleep on the gravity of being involved in this case.

DA's office says Trump has violated judge's gag order seven more times

Prosecutor Chris Conroy handed up a new order in response to Trump's social media posts. The DA alleges that Trump has violated the judge's gag order seven more times and he wants the posts included in the hearing scheduled for Tuesday.

Yesterday, the former president complained about the jury selection process and Conroy said that "most disturbingly" Trump quoted a Fox News host suggesting that "undercover" liberal activists are lying to get onto the jury.

Conroy said the DA's office is still considering options in terms of sanctions prosecutors are seeking.

Merchan raises concerns about "the veracity of Juror #4’s answers"

After discussion about the gag order, Merchan said he had concerns about one of the jurors and how truthfully the person had answered questions.

One of the questions on the juror questionnaire asks if the juror or any of their family members were accused of a crime.

Joshua Steinglass of the DA's office told Merchan that they discovered an article featuring a person with the same name who was arrested in Westchester in the 1990s for tearing down political advertisements.

Merchan implores the press to use 'common sense' when reporting jurors' descriptions

Merchan asked reporters to use "common sense" when describing the jurors' physical descriptions.

"There was really no need to mention that one of the jurors had an Irish accent," he said.

A juror has been excused from duty

Juror 2, the oncology nurse, has been excused from duty. As court started today, Merchan told lawyers on both sides that the juror called and conveyed that after sleeping on it, she had concerns about being fair and impartial.

She had concerns about her identity becoming public and said that friends and family have already inquired about whether she is a juror. The juror added that given these outside influences, she was concerned about her ability to be fair and impartial.

An oncology nurse, a corporate lawyer and a man with "no spare time": Meet the first 7 jury members of Trump’s hush money trial

The first seven people were selected to serve on the jury in Trump’s  hush money trial  in New York on Tuesday after they made it clear to both sides that they could render a fair and impartial verdict.

They were chosen on the second day of the trial after prosecutors and the defense team whittled down a group of 96 potential jurors. At one point, Merchan admonished Trump after he observed him audibly mouthing something in the direction of one of the jurors, who had been asked about a social media post she made the day Joe Biden was declared the winner of the 2020 election.

“I won’t tolerate that,” Merchan said. “I will not have any jurors intimidated in this courtroom.” Trump’s lawyers ultimately eliminated the woman from the jury pool.

The seven chosen so far were sworn in Tuesday and directed by Merchan to return to court Monday.

Twelve people will be seated on the jury, and each side will select alternates. The trial is expected to last as long as eight weeks.

Read more on the seven jurors selected so far.

Day 3 begins

Merchan has taken the bench — a few minutes early — and started Day 3.

Trump is taking a phone call at the defense table

Trump is using his phone in the courtroom, openly flouting the rules of the courtroom. Blanche just told him to stop and Trump tucked the phone in his pocket while looking annoyed.

Prosecutors seek to ask Trump about civil fraud, E. Jean Carroll cases and more if he testifies in hush money case

Dareh Gregorian

Prosecutors from the Manhattan district attorney’s office said in a court filing yesterday that they plan to ask Trump about the costly verdicts and findings of wrongdoing in his numerous civil cases if the former president decides to testify in the criminal case — though the permissibility of that line of questioning remains to be seen.

The prosecutors said they intend to ask Trump about the judgment in New York Attorney General Letitia James’ civil fraud suit against him and his company, as well as a pair of verdicts in lawsuits brought by writer E. Jean Carroll. The judgments in the three cases total almost $550 million and include findings that Trump committed fraud in the AG’s case and that he is liable for sexual abuse and defamation in the Carroll case.

District Attorney Alvin Bragg’s office also plans to mention findings by the judge in the civil fraud case that Trump violated a gag order and “testified untruthfully under oath” during the trial.

Prosecutors said they want to be able to bring up those findings — which Trump is appealing — “to impeach the credibility of the defendant” if he takes the witness stand.

Trump said last week he “absolutely” plans to testify but is under no obligation to do so.

Trump lawyers in Florida classified docs case seek more time to meet deadlines in order to "defend him in New York and before this Court"

In a filing today, Trump’s legal team in the classified documents case against him in Florida asked for more time to meet deadlines so that they can “defend him in New York and before this Court.”

Trump’s lawyers argue that their client and his counsel “cannot prepare — or even discuss — the required filings anywhere but an appropriate SCIF (sensitive compartmented information facility), a virtually impossible task given” the former president and his lawyers Blanche and Emil Bove’s involvement in the hush money trial.

“The special counsel’s office argues President Trump’s constitutional rights are ‘not implicated’ because his counsel has had ‘months to prepare the submissions at issue’ and will ‘only be in trial four days a week in New York,’” Trump’s lawyers wrote in the filing. “This premise is untethered to reality and disregards the substantial motion practice that has occurred before this Court.”

Trump departs Trump Tower

Brittany Kubicko

Trump has left Trump Tower and is headed to the courthouse for Day 3 of his hush money trial.

Donald Trump

Fiery exchanges over Facebook posts and Trump’s behavior mark second day of trial

Jonathan Allen

The first seven jurors were selected for Trump’s hush money trial Tuesday amid a battle over prospective jurors’ old Facebook posts and calls to “lock him up” and the judge’s warning that the former president should not try to intimidate the panelists who will be deciding his fate.

“I will not have any jurors intimidated in this courtroom. I want to make this crystal clear,” Merchan told Trump and Blanche outside the jurors' presence. Merchan told Blanche his client was “audibly” saying something in the direction of the juror while she was “12 feet away from your client.”

Merchan said that he didn’t know what Trump was saying but that he’d been “muttering” and “gesturing” at the juror, and he directed Blanche to talk to his client about his behavior. Blanche then whispered something into Trump’s ear.

The incident underscores Trump’s penchant for acting up in court and the problems his lawyers might have keeping him in check. He spoke loudly in front of jurors during the E. Jean Carroll defamation trial and at one point stormed out of his civil fraud trial — two trials he appeared at voluntarily. His presence is required in the criminal case, and the trial could last as long as eight weeks.

The current drama came on the second day of jury selection as seven jurors were selected for the case. The jury is anonymous, so their names weren’t used in open court, but panelists include a lawyer, a salesman, an oncology nurse, an IT consultant, a teacher and a software engineer. The seven were sworn in and told to return to court Monday.

Read the full story here

The first jurors have now been chosen for Trump’s criminal hush money trial after a cross-section of Manhattan residents openly revealed their views of the likely GOP nominee. NBC’s Laura Jarrett reports for "TODAY."

On trial off-day, Trump complains about jury selection process for his criminal case

Trump ripped the jury selection process in his historic New York criminal trial yesterday, the day after the first seven jurors were selected out of a pool of almost 100 people.

Posting about the hush money trial on its scheduled off-day, Trump — who has repeatedly accused the judge in the case of being biased against him — suggested incorrectly that he should be entitled to unlimited strikes of potential jurors in his criminal case.

“I thought STRIKES were supposed to be ‘unlimited’ when we were picking our jury? I was then told we only had 10, not nearly enough when we were purposely given the 2nd Worst Venue in the Country,” he wrote on Truth Social before he decried the criminal cases against him as “election interference” and part of a “witch hunt.”

Under New York law, each side does have an unlimited number of strikes “for cause,” but Merchan, the judge presiding over the case, can decide whether that cause is worthy of a strike.

The two sides are also entitled to a limited number of “peremptory strikes” — potential jurors they can dismiss. Because Trump is charged with a Class E felony, which is a lower-level felony, he and prosecutors are entitled to 10 peremptory challenges each. (The number goes up to 20 for defendants facing the highest level of felony charge, Class A.)

While Merchan has dismissed scores of potential jurors who said they could not be impartial or had scheduling conflicts, he has dismissed only two for cause in the two days since jury selection began. One was a person who had written “lock him up” about Trump in a 2017 social media post. Merchan denied some other Trump cause dismissal requests, including one for a woman who had posted on Facebook about celebrating Joe Biden’s 2020 election win.

Trump’s attorney Todd Blanche then used one of his peremptory challenges to remove the woman.

Read the full story

Trump hush money trial resumes with jury selection after day off

Jury selection is set to resume in former President Donald Trump's hush money trial in New York City after a break in action yesterday.

With seven jurors already having been selected from a pool of 96, the schedule for today will focus largely on questioning potential jurors in a second group of the same size to see whether they can be fair and impartial when it comes to Trump. State Judge Juan Merchan has said he hopes to have 12 jurors, as well as alternates, selected by the end of tomorrow.

Prosecutors and lawyers for Trump will have less opportunity to dismiss potential jurors going forward, because both used six of their 10 peremptory challenges Tuesday.

While both sides can make an unlimited number of challenges for cause, it is up to the judge to decide whether to grant those challenges and strike those jurors. Merchan dismissed two jurors for cause Tuesday, one of whom had posted a “lock him up” message about Trump on Facebook, but he denied some other challenges.

The pandemic cost 7 million lives, but talks to prevent a repeat stall

In late 2021, as the world reeled from the arrival of the highly contagious omicron variant of the coronavirus, representatives of almost 200 countries met — some online, some in person in Geneva — hoping to forestall a future worldwide outbreak by developing the first-ever global pandemic accord.

The deadline for a deal? May 2024.

The costs of not reaching one? Incalculable, experts say. An unknown future pathogen could have far more devastating consequences than SARS-CoV-2, which cost some 7 million lives and trillions of dollars in economic losses.

But even as negotiators pack in extra hours, the goal of clinching a legally binding pact by next month is far from certain — despite a new draft document being delivered in recent days. The main sticking point involves access to vital information about new threats that may emerge — and to the vaccines and medicines that could contain that threat.

“It’s the most momentous time in global health security since 1948,” when the World Health Organization was established, said Lawrence O. Gostin, director of the WHO Collaborating Center for National and Global Health Law at Georgetown University.

The backdrop to today’s negotiations is starkly different from the years after World War II when countries united around principles guaranteeing universal human rights and protecting public health. The unifying fear of covid has been replaced by worries about repeating the injustices that tainted the response to the pandemic, deepening rifts between the Global North and the Global South.

“The trauma of the covid-19 pandemic has seeped into the negotiations,” said Ellen 't Hoen, a lawyer and public health advocate who specializes in intellectual property policies. Representatives of the WHO’s 194 member countries, she said, are looking backward rather than forward.

The reasons are clear. A paper published in October 2022 in the journal Nature showed that by the end of 2021, nearly 50 percent of the global population had received two doses of coronavirus vaccine but that huge disparities existed between high-income countries, where coverage was close to 75 percent, and many low-income countries, where less than 2 percent of the population had received two doses. At the same time, South Africa, where the omicron variant was identified, felt punished by travel bans instead of being praised for its scientists’ epidemiological acumen and openness.

“We felt like we were beggars when it came to vaccine availability,” South African President Cyril Ramaphosa recalled at a global financial summit in 2023. “We felt like life in the Northern Hemisphere is much more important than life in the Global South.”

The United States has signaled its support for a legally binding agreement, including leveraging its purchasing power to expand access to medicines around the world. But the United States, like many European Union countries, is the object of mistrust because it is the seat of the powerful pharmaceutical industry, which is reluctant to relax control over manufacturing know-how.

The chief point of contention involves pathogen access and benefit sharing. In many ways, the story of the fraught pandemic accord negotiations is the story of Henrietta Lacks — the African American patient whose cancer cells were used in research for years without her family’s knowledge — retold on a global stage. Who gets to use — and profit from — samples and scientific information, which often come from disadvantaged groups?

High-income countries want guarantees that samples and genetic data about any new pathogen will be quickly shared to allow for the development of tests, vaccines and treatments. Developing nations, where pathogens such as AIDS, Ebola and MERS emerged in recent decades, want guarantees of benefits, such as equal access to vaccines and collaboration with local scientists.

Almost 20 years ago, the Indonesian government forced those contrasting priorities to the forefront by refusing to share bird flu samples. WHO member states responded by creating the Pandemic Influenza Preparedness Framework, or PIP, under which key manufacturers agree to supply 10 percent of the flu vaccines they make to the WHO for distribution.

No such agreement exists for other pathogens with pandemic potential.

“The PIP Framework provides us with good guidance for what an access and benefits sharing instrument could look like, but there are areas where the pandemic agreement could improve,” said Alexandra L. Phelan, senior scholar at the Johns Hopkins Center for Health Security, who co-authored a piece in the journal Nature in February calling for a “science-for-science mechanism” to ensure vaccine equity in the next pandemic.

A new agreement, Phelan said, could include an obligation to share genetic sequence data and factor in public health risks when determining how medical products are shared during an emergency. Unlike in earlier outbreaks, no need exists today to wait for a pathogen sample to arrive by mail in a test tube; work on vaccines and treatments can begin based on genetic sequencing attached to an email.

Even as negotiators wrestle over those points, the venture is being roiled by misinformation on social media, including hostility toward the WHO and assertions that any international agreement would threaten the sovereignty of nations — claims that WHO Director General Tedros Adhanom Ghebreyesus has condemned as “utterly, completely, categorically false.” The final agreement, Tedros said in early April, won’t give the WHO power to impose lockdowns or mask mandates in individual countries.

Underlying it all is “a lack of trust,” said 't Hoen, who, like Phelan, is one of the outside experts approved by member states to provide input to the negotiations although they do not take part in the closed-door talks. Some describe lingering in the cafeteria, waiting for the opportunity to glean information or offer counsel to country representatives when they emerge in need of refreshment.

“This is a pretty nontransparent process,” said Phelan, with “a lot of grumpy and unhappy people.”

The stymied talks prompted former British prime minister Gordon Brown, who serves as WHO ambassador for global health financing, to write a letter in March to the 194 WHO member states urging them to collaborate for the common good. The letter was signed by many former presidents and prime ministers, along with experts in global health and finance.

But signing on is less politically palatable for current political leaders now that so many people have moved on from the pandemic, choosing to ignore the not-if-but-when warnings that public health officials are airing again today, just as they did before the novel coronavirus was identified more than four years ago in China.

“The global leadership is absent,” said Nina Schwalbe, principal of the global health think tank Spark Street Advisors, another expert approved to provide input to the negotiations.

And in many ways, the coronavirus has left the world more vulnerable, Schwalbe and others argue, amid increased resistance to vaccination and other preventive measures and the weariness of public health officials. In some U.S. states, officials’ powers have been curtailed by legislatures.

Meanwhile, climate change and increased interactions between human and animal populations are increasing the possibility of spillover events spawning zoonotic diseases that are all but impossible to contain given the speed of modern travel.

Efforts to study pathogens present their own risk, with laboratories around the world engaged in medical and military research aimed at increasing the virulence of existing bacteria and viruses through “gain-of-function” research, posing a threat of accidental or deliberate release.

And in March, the National Academies of Sciences, Engineering, and Medicine released a paper outlining a new peril: Advances in gene editing and synthetic biology make it possible to revive pathogens including the virus that causes deadly and disfiguring smallpox — the only human disease declared to have been eradicated following a vaccine campaign by epidemiologists nearly half a century ago.

“New technologies could enable nefarious actors to genetically engineer the smallpox virus from scratch or make it even more lethal,” said Gostin, who chaired the committee that produced the National Academies report. “The potential for a laboratory leak or intentional release of smallpox or other pox viruses is real.”

Nature is also making a show of strength.

Since the beginning of 2023, the Democratic Republic of Congo has reported more than 12,000 cases of mpox resulting in 581 deaths, according to the Centers for Disease Control and Prevention, and there have been more than 700 cases this year in the United States. Bird flu has been identified in dairy cows in several U.S. states, with one dairy worker being treated for symptoms. A new JN.1 strain of the coronavirus is circulating.

When the ninth and supposedly final round of talks on the global pandemic accord closed in late March with no agreement, Tedros declared overtime, setting a date in late April for negotiations to resume. The WHO director general has portrayed the pandemic agreement as an urgent generational opportunity, only the second such global health accord, following the 2003 Framework Convention on Tobacco Control, which used new taxation and labeling and advertising rules to target smoking.

Asked in early April whether a deal could still be struck, Tedros sounded cautious. “I believe it can happen,” he said. In mid-April, the policy nonprofit Health Policy Watch published a new bare-bones draft agreement that is being sent to member states. It maintains support for equity, while leaving key details to be hashed out during the next two years, by which time the leadership of many instrumental countries, including the United States, may have changed. Meetings are set to resume April 29.

Some experts have speculated that the original timeline was too short to unite 194 countries around such a divisive and complex topic, pointing out that many treaties take years to finalize and that this process has been complicated by concurrent negotiations over the International Health Regulations, which aim to prevent the spread of disease. The Biden administration also just announced its own Global Health Security Strategy, with a goal of combating health emergencies by using U.S. leadership to drive investment in prevention and response among partner countries.

But past crises have shown that complex global negotiations can move quickly.

“After Chernobyl, a legally binding treaty was negotiated within six months,” Schwalbe said, referring to the 1986 nuclear power plant disaster. “Covid-19 is a calamity of equal importance.”

COMMENTS

  1. Indirect speech

    What is indirect speech or reported speech? When we tell people what another person said or thought, we often use reported speech or indirect speech. To do that, we need to change verb tenses (present, past, etc.) and pronouns (I, you, my, your, etc.) if the time and speaker are different.For example, present tenses become past, I becomes he or she, and my becomes his or her, etc.

  2. Test 9: Reported speech

    Test 9: Reported speech. Choose the correct answer. "I like cake." ... Leo said he like cake. / Leo said he'd like cake. / Leo said he liked cake. "We don't want to go to the concert." ... They said they didn't want to go to the concert. / They said didn't want to go to the concert.

  3. Reported Speech Quiz

    Online quiz to test your understanding of English reported speech. This is a free multiple-choice quiz that you can do online or print out. It tests what you learned on the Reported Speech pages.

  4. Reported Speech Quiz

    Instructions. In this reported speech quiz there are 10 direct speech statements. Turn them into reported speech. Make sure you use correct capitalisation and punctuation and that you include 'that'. For example, Mike said, "I am late" would be Mike said that he was late. John said, "I want to see a film". Tina said, "I am tired".

  5. Reported speech : Quiz 1

    A guide to learning English. Grammar and vocabulary quizzes. Information and advice for learners, teachers and parents.

  6. Unit 12A Grammar: Reported Speech (1)

    Reported Speech. Greg: "I am cooking dinner, Maya." Maya: "Greg said he was cooking dinner." So most often, the reported speech is going to be in the past tense, because the original statement will now be in the past! *We will learn about reporting verbs in part 2 of this lesson, but for now we will just use said/told.

  7. Reported Speech

    Watch my reported speech video: Here's how it works: We use a 'reporting verb' like 'say' or 'tell'. (Click here for more about using 'say' and 'tell'.) If this verb is in the present tense, it's easy. We just put 'she says' and then the sentence: Direct speech: I like ice cream. Reported speech: She says (that) she likes ice cream.

  8. Reported Speech multiple-choice questions

    B1 English Grammar Test - Reported Speech multiple-choice questions. Choose the correct answer. 1 Steve said: "I work every day." a) Steve said that he was working every day. b) Steve said that he had worked every day. c) Steve said that he worked every day. d) Steve said that he would work every day. 2 Rachel: "I'm playing the piano ...


  10. BBC Learning English

    Session Grammar. Reported speech. One thing to remember: move the tense back! 1) Present simple -> past simple: "I know you." -> She said she knew him. 2) Present continuous -> past continuous


  13. Please explain these tests to me: TEST V-1

    Please explain these tests to me. TEST V-1. 1. Choose the right variant of the indirect speech. Bob asked: "Did Tom come yesterday?" Bob asked me … a) if Tom had come the day before b) if Tom came yesterday c) if Tom had come yesterday d) did Tom come yesterday e) Tom didn't come yesterday. 2. What time is it?

  14. B1 English Grammar Test

    This sentence into Reported Speech becomes: Tom said he … to exotic places. a) travel. b) traveled. c) travels. 17 Susan said, "I am going to the store". This sentence into Reported Speech becomes: Susan said that she … to the store. a) going. b) am going. c) was going. 18 Linda told me, "I owned a flower shop". This sentence into ...

  15. 99 Test Reported speech I, variant 1

    1. He said: "I have just received a letter from my uncle." 2. "I am going to the theatre tonight," he said to me. 3. Mike said: "I spoke to Mr. Brown this morning." 4. He said to her: "I will do it today if I have time." 5. I said to them: "I can give you my uncle's address." 6.

  16. Reported Speech- 1 Free MCQ Practice Test with Solutions

    Test: Reported Speech- 1 for Class 9 (2024) is part of Class 9 exam preparation, and the questions and answers have been prepared according to the Class 9 exam syllabus. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests for Test: Reported Speech- 1 below.

  17. Mail.ru Answers: English language test

    Top contributor. TEST: Reported speech, Variant 1. 1. Fill in the right verb. 1 Mom asked what time we ___ back the day before. A had come. 2 The teacher ordered the pupils ___. A not to shout. 3 Chris said his parents ___ then. B were sleeping. 4 He said he ___ to the swimming pool the next day. C was going. 5 Lucy says she ___ well.


  20. Grammar test tasks in English for 8th-grade pupils

    Test 2. Reported Speech. Choose the correct variant. 1. I said, "The room looks clean and tidy." a) I said (that) the room would look clean and tidy. b) I said (that) the room looked clean and tidy. c) I said (that) the room was looked clean and tidy. 2. Bobby said, "I will not go to school on Sunday."


  22. Reported Speech

    Tests. English, Grade 11. Test by T. Radulova, added 1 March 2023; completed 1251 times.


  24. Reported speech 1

    A test in English for grades 8–9 with 11 questions. «She said, "I am reading."...». Author: Nataliia Korzh.
