

What Is Speech Therapy For ADHD & Why Is It Important?


ADHD is a neurodevelopmental disorder that’s often diagnosed in childhood. This means it’s a condition that’s caused by differences in the way the brain develops. 

Every child with ADHD is unique. An evaluation from an expert is an important step in getting them the help they need and understanding their strengths and struggles. 

Speech therapists are trained to provide skilled assessments of children with ADHD. These professionals are also known as speech and language pathologists, or SLPs. 

Speech and language therapists can help you identify your child’s issues early on in their development. Studies show identifying children with ADHD in preschool can reduce its long-term consequences.

Early intervention seeks to target children aged three to five with signs and symptoms of ADHD. Most children with ADHD begin showing symptoms by this time in their development. If these symptoms can be recognized and diagnosed, therapy can begin. The earlier your child can begin therapy, the better their outcomes tend to be.

Thankfully, there is support to help your child with ADHD address and overcome issues with communication. This article will explore the many benefits of speech therapy for kids with ADHD.

ADHD & Speech Problems: Understanding the Relationship

At its core, ADHD is an issue with executive functions. These skills include our ability to self-regulate, plan, and make decisions. They also cover our judgment and insight. ADHD affects the development of these skills.

Kids with ADHD often experience challenges with planning and organization skills. They also struggle with emotional regulation and paying attention. They may experience difficulties with time management, following directions, listening, and remembering things. Hyperactivity is another hallmark of ADHD. 

It’s easy to see how executive functioning issues can impact a child’s educational experience, and their development on the whole. 

Executive functions are also tied in with our communication skills in important ways.

  • Our ability to pay attention influences our skills with listening and following directions. It also affects our skills with structuring a story and answering questions.
  • Our planning skills help us organize our thoughts into speech and stories. They also help us make lists and understand what listeners need when we’re giving information.
  • Our inhibition skills keep us from saying the first thing that pops into our head. They also help us understand what language is appropriate or not for certain people and environments.
  • Our emotional regulation helps us participate fully in structured environments, like school. It also helps us manage big feelings and refrain from acting out on them. Self-regulation skills also underpin our ability to monitor our speech and language, and correct any errors. 

Because children with ADHD struggle with these skills, it can affect their academics and social engagement. It can also negatively influence their self-esteem. Understanding how ADHD affects learning is crucial in addressing these challenges. Many research studies have highlighted the impact of ADHD on speech and language skill development. It’s believed that between 17% and 38% of children with ADHD also have speech and language disorders.

The Role of Speech Therapy in ADHD Treatment

Speech therapy can address speech and language deficits in children with ADHD. Your speech therapist will create goals that are specific to your child’s needs. They’ll also take into account your child’s likes and preferences, to keep them motivated in therapy. Here’s how speech therapists help kids with ADHD. 

Improve their Attention

SLPs typically use games and activities that require sustained focus to complete. They can also help kids with ADHD stay focused during structured activities by using the child’s name, making eye contact, and encouraging them to restate what they’ve heard. Tasks can also be provided in both written and verbal form, to offer kids with ADHD more than one way to learn.

Improve Organization Skills

For school-aged kids, speech therapists can provide support in organization aids and strategies. This might include graphic organizers, visual schedules, and color coding of schoolwork. 

Modify Activities to Promote Learning

SLPs can incorporate movement into therapy for kids with ADHD, as these kids tend to love staying active. 

Games like Simon Says, beanbag toss, Red Light Green Light, and hopscotch can all be co-opted to target speech and language goals. 

SLPs can create games that involve jumping, racing, and sensory stimulation. Goals for following directions, recall, regulation, and vocabulary can be embedded into these tasks. 

Overcome Educational Obstacles

SLPs partner with educators to implement modifications to the classroom to support kids with ADHD. This can include seating considerations, extra time to complete assignments, and checking in.

It can also involve creating routines, providing outlines, or breaking down tasks into smaller parts. Some kids with ADHD may benefit from a note taking buddy, or assistive devices to help them organize work.

The goal of speech therapy for ADHD is to meet your child where they are, and help them learn how they learn best. 

That’s why SLPs create personalized treatment plans, designed to meet the needs of each child they treat. These plans are especially beneficial for highly sensitive children who struggle with ADHD. They contain long- and short-term goals, based on a child’s areas of need. These targeted interventions give children with ADHD the learning support they need to succeed.


Integrating Speech Therapy Into Multidisciplinary ADHD Treatment

With multifaceted conditions like ADHD, it’s important to take a holistic approach to treatment. Speech therapy can complement other ADHD treatments, to comprehensively address ADHD in children. 

Speech, occupational, and behavioral therapists work together to treat kids with ADHD.

ADHD treatment can also include medications and educational support services. 

Healthcare professionals from different disciplines work together to treat ADHD. Because ADHD impacts how the brain works, it makes sense to treat it comprehensively. This way, all areas of need can be addressed, and treatment efforts can be coordinated. 

For example, a speech and occupational therapist may work together on emotional regulation goals for kids with ADHD. This is within the scope of practice for both of these therapists, but they can approach it through the lens of their specific expertise. 

In this scenario, a speech therapist might target goals around impulsivity and social language. An OT may focus on physical regulation, or building movement breaks into daily routines for kids with ADHD. Because ADHD is a complex condition, treating it effectively requires a collaborative approach.

Benefits of Speech Therapy for Children with ADHD

In addition to improving speech and language, speech therapy has many additional benefits for children with ADHD. 

The benefits of speech therapy for kids with ADHD also include:

  • Improved attention and problem-solving skills
  • Improved self-regulation skills
  • Better decision-making
  • Sounder judgment
  • Better adherence to social rules
  • Stronger inferencing (making educated guesses)
  • Greater ability to maintain attention

Without treatment, ADHD is associated with reduced academic performance and success. Speech therapists help kids with ADHD to excel in school and in life. They provide holistic, individualized treatments that support personal learning needs. Speech and language therapy is linked to improved educational outcomes in children with ADHD.

SLPs create modifications and accommodations to a child’s learning environments. In this way, SLPs go beyond treatment to address all aspects of a child’s educational experience.

Practical Tips for Speech Therapy for ADHD at Home

Parents and caregivers of children with ADHD can be a boon to therapy efforts at home. Learning how to treat an ADHD child at home can significantly complement professional therapy and contribute to a more supportive environment. It’s always wise to discuss specific ways you can support your child with their speech therapist. They’ll be happy to provide you with the ADHD study tips, training, and resources you need to support your child’s progress at home.

The structure of a collaboration between families and speech therapists depends on your child’s goals and what works best for them. 

Research supports that parental involvement enhances outcomes and maximizes your child’s progress. 

You can help your child with ADHD to reach their full potential and meet their speech therapy goals. 

Here are some practical suggestions for strategies you can use at home:

  • Incorporating visual aids and picture schedules into your child’s home routine
  • Supporting their time management skills with clocks, calendars, and other time tracking devices
  • Using a traffic light system to help them monitor and regulate their emotions 
  • Incorporating healthy movement into their home life. This can include yoga, dance breaks, tai chi, walking or any type of movement your child enjoys
  • Modeling appropriate word usage and sentence structure
  • Helping them to recognize when they’re feeling unfocused or overstimulated. Supporting them in using calming and centering strategies when this occurs
  • Reading their school assignments aloud to promote understanding
  • Reviewing work and home expectations with them, to be sure they’re clear 
  • Breaking down chores or other household tasks into smaller, more manageable chunks 
  • Clearly laying out the steps to complete household tasks 
  • Offering multimodal supports to their learning (i.e. a combination of visual, verbal, and written cues)

Be sure to communicate with your child’s speech therapist for home therapy help. They can offer you specific speech therapy ADHD activities for kids to use at home.

In some cases, your SLP will be able to work with you in your home, to offer you and your child guidance in this setting. 

Using Forbrain in Speech Therapy for ADHD

Forbrain is an auditory stimulation headset. It can be used to enhance auditory processing and speech production. It can also be used to enhance information recall and attention skills. 

Forbrain can be used by speech therapists with those who need enhanced auditory feedback and stimulation. In some cases, Forbrain can support speech therapy for kids with ADHD. 

It can support focus and processing in children with ADHD who struggle with these skills. It can also enhance auditory feedback loops, and promote information retention.  

Speech therapists who are focused on building these skills may trial and use Forbrain as a complementary tool in their therapy plans. 

One study suggests that children with ADHD often have deficits in the automatic processing of information. The same study posited that kids with ADHD use a higher cognitive effort to decode auditory information. This increased effort might impact their ability to complete tasks involving auditory components. 

Another study looked at the impact of auditory distractions on the arithmetic performance of kids with and without ADHD. They found evidence to support that some types of auditory stimulation improved the math performance of kids with ADHD. 

In some children with ADHD, Forbrain may be appropriate to include in speech therapy to help address and compensate for these issues. 

ADHD is a wide-ranging condition. To treat it, speech therapists can draw from many different tools, strategies, and modalities. The specific approaches used will depend on each individual child being served. 

Final Words

Many children with ADHD have issues with speech and language skills. Speech therapy has many benefits for children with ADHD. It’s an important part of a holistic treatment approach for ADHD. 

Speech therapy interventions show positive results with children with ADHD. Benefits include:

  • Improved attention
  • Improved social skills
  • Enhanced organizational skills 
  • Improved language skills
  • Reduced impulsivity 
  • Improved self-monitoring
  • Improved planning skills
  • Improved decision-making skills 

Speech therapists can also create modifications and special ADHD accommodations to ensure a child can thrive in the classroom. Speech therapists work closely with educators and parents, to make sure ADHD treatment is targeted and holistic. They also collaborate with other therapists. 

If your child has known or suspected ADHD, it’s important to speak with their pediatrician or teacher to discuss your concerns. Healthcare providers and educators can connect you with the support you need to help your child thrive.

Speech therapy is an important part of your child’s comprehensive ADHD treatment. Your speech therapist will empower your child’s strengths and help them overcome any challenges they face. They’ll also be there to support and guide you. You can work together to help your child with ADHD to reach their fullest potential.

Abikoff, H., Courtney, M. E., Szeibel, P. J., & Koplewicz, H. S. (1996). The effects of auditory stimulation on the arithmetic performance of children with ADHD and nondisabled children. Journal of learning disabilities, 29(3), 238–246. https://doi.org/10.1177/002221949602900302

Aldakroury, W. (2018). Speech and language disorders in ADHD. Abnormal and Behavioural Psychology, 4. https://doi.org/10.4172/2472-0496.1000134

Damico, S. K., & Armstrong, M. B. (1996). Intervention strategies for students with ADHD: creating a holistic approach. Seminars in speech and language, 17(1), 21–35. https://doi.org/10.1055/s-2008-1064086

Diamond A. (2013). Executive functions. Annual review of psychology, 64, 135–168. https://doi.org/10.1146/annurev-psych-113011-143750

Fabio, R. A., Castriciano, C., & Rondanini, A. (2015). ADHD: Auditory and Visual Stimuli in Automatic and Controlled Processes. Journal of attention disorders, 19(9), 771–778. https://doi.org/10.1177/1087054712459562

Halperin, J. M., Bédard, A. C., & Curchack-Lichtin, J. T. (2012). Preventive interventions for ADHD: a neurodevelopmental perspective. Neurotherapeutics: The Journal of the American Society for Experimental NeuroTherapeutics, 9(3), 531–541. https://doi.org/10.1007/s13311-012-0123-z

Harpin, V., Mazzone, L., Raynaud, J. P., Kahle, J., & Hodgkins, P. (2016). Long-Term Outcomes of ADHD: A Systematic Review of Self-Esteem and Social Function. Journal of Attention Disorders, 20(4), 295–305. https://doi.org/10.1177/1087054713486516

Heyer J. L. (1995). The responsibilities of speech-language pathologists toward children with ADHD. Seminars in speech and language, 16(4), 275–288. https://doi.org/10.1055/s-2008-1064127

Klatte, I. S., Lyons, R., Davies, K., Harding, S., Marshall, J., McKean, C., & Roulstone, S. (2020). Collaboration between parents and SLTs produces optimal outcomes for children attending speech and language therapy: Gathering the evidence. International journal of language & communication disorders, 55(4), 618–628. https://doi.org/10.1111/1460-6984.12538

Loe, I. M., & Feldman, H. M. (2007). Academic and educational outcomes of children with ADHD. Journal of pediatric psychology, 32(6), 643–654. https://doi.org/10.1093/jpepsy/jsl054

Mathers, M. E. (2006). Aspects of Language in Children With ADHD: Applying Functional Analyses to Explore Language Use. Journal of Attention Disorders, 9(3), 523–533. https://doi.org/10.1177/1087054705282437

Roberts, M. Y., & Kaiser, A. P. (2011). The effectiveness of parent-implemented language interventions: a meta-analysis. American journal of speech-language pathology, 20(3), 180–199. https://doi.org/10.1044/1058-0360(2011/10-0055)

Sciberras, E., Mueller, K. L., Efron, D., Bisset, M., Anderson, V., Schilpzand, E. J., Jongeling, B., & Nicholson, J. M. (2014). Language problems in children with ADHD: a community-based study. Pediatrics, 133(5), 793–800. https://doi.org/10.1542/peds.2013-3355

Sira, C. S., & Mateer, C. A. (2014). Executive function. In Encyclopedia of the Neurological Sciences (pp. 239–242). https://doi.org/10.1016/B978-0-12-385157-4.01147-7

Crystal Bray



Types of Speech Impediments

Sanjana is a health writer and editor. Her work spans various health-related topics, including mental health, fitness, nutrition, and wellness.


Steven Gans, MD is board-certified in psychiatry and is an active supervisor, teacher, and mentor at Massachusetts General Hospital.




A speech impediment, also known as a speech disorder , is a condition that can affect a person’s ability to form sounds and words, making their speech difficult to understand.

Speech disorders generally become evident in early childhood, as children start speaking and learning language. While many children initially have trouble with certain sounds and words, most are able to speak easily by the time they are five years old. However, some speech disorders persist. Approximately 5% of children aged three to 17 in the United States experience speech disorders.

There are many different types of speech impediments, including:

  • Disfluencies
  • Articulation errors
  • Ankyloglossia (tongue-tie)
  • Dysarthria
  • Apraxia

This article explores the causes, symptoms, and treatment of the different types of speech disorders.

Disfluencies

Speech impediments that break the flow of speech are known as disfluencies. Stuttering is the most common form of disfluency; however, there are other types as well.

Symptoms and Characteristics of Disfluencies

These are some of the characteristics of disfluencies:

  • Repeating certain phrases, words, or sounds after the age of 4 (For example: “O…orange,” “I like…like orange juice,” “I want…I want orange juice”)
  • Adding in extra sounds or words into sentences (For example: “We…uh…went to buy…um…orange juice”)
  • Elongating words (For example: Saying “orange joooose” instead of "orange juice")
  • Replacing words (For example: “What…Where is the orange juice?”)
  • Hesitating while speaking (For example: A long pause while thinking)
  • Pausing mid-speech (For example: Stopping abruptly mid-speech, due to lack of airflow, causing no sounds to come out, leading to a tense pause)

In addition, someone with disfluencies may also experience the following symptoms while speaking:

  • Vocal tension and strain
  • Head jerking
  • Eye blinking
  • Lip trembling

Causes of Disfluencies

People with disfluencies tend to have neurological differences in areas of the brain that control language processing and coordinate speech, which may be caused by:

  • Genetic factors
  • Trauma or infection to the brain
  • Environmental stressors that cause anxiety or emotional distress
  • Neurodevelopmental conditions like attention-deficit hyperactivity disorder (ADHD)

Articulation Errors

Articulation disorders occur when a person has trouble placing their tongue in the correct position to form certain speech sounds. Lisping is the most common type of articulation disorder.

Symptoms and Characteristics of Articulation Errors

These are some of the characteristics of articulation disorders:

  • Substituting one sound for another. People typically have trouble with ‘r’ and ‘l’ sounds. (For example: Being unable to say “rabbit” and saying “wabbit” instead)
  • Lisping, which refers specifically to difficulty with ‘s’ and ‘z’ sounds. (For example: Saying “thugar” instead of “sugar” or producing a whistling sound while trying to pronounce these letters)
  • Omitting sounds (For example: Saying “coo” instead of “school”)
  • Adding sounds (For example: Saying “pinanio” instead of “piano”)
  • Making other speech errors that can make it difficult to decipher what the person is saying. For instance, only family members may be able to understand what they’re trying to say.

Causes of Articulation Errors

Articulation errors may be caused by:

  • Genetic factors, as it can run in families
  • Hearing loss, as mishearing sounds can affect the person’s ability to reproduce them
  • Changes in the bones or muscles that are needed for speech, including a cleft palate (a hole in the roof of the mouth) and tooth problems
  • Damage to the nerves or parts of the brain that coordinate speech, caused by conditions such as cerebral palsy

Ankyloglossia

Ankyloglossia, also known as tongue-tie, is a condition where the person’s tongue is attached to the bottom of their mouth. This can restrict the tongue’s movement and make it difficult to speak.

Symptoms and Characteristics of Ankyloglossia

Ankyloglossia is characterized by difficulty pronouncing ‘d,’ ‘n,’ ‘s,’ ‘t,’ ‘th,’ and ‘z’ sounds that require the person’s tongue to touch the roof of their mouth or their upper teeth, as their tongue may not be able to reach there.

Apart from speech impediments, people with ankyloglossia may also experience other symptoms as a result of their tongue-tie. These symptoms include:

  • Difficulty breastfeeding in newborns
  • Trouble swallowing
  • Limited ability to move the tongue from side to side or stick it out
  • Difficulty with activities like playing wind instruments, licking ice cream, or kissing
  • Mouth breathing

Causes of Ankyloglossia

Ankyloglossia is a congenital condition, which means it is present from birth. A tissue known as the lingual frenulum attaches the tongue to the base of the mouth. People with ankyloglossia have a shorter lingual frenulum, or it is attached further along their tongue than most people’s.

Dysarthria

Dysarthria is a condition where people slur their words because they cannot control the muscles that are required for speech, due to brain, nerve, or organ damage.

Symptoms and Characteristics of Dysarthria

Dysarthria is characterized by:

  • Slurred, choppy, or robotic speech
  • Rapid, slow, or soft speech
  • Breathy, hoarse, or nasal voice

Additionally, someone with dysarthria may also have other symptoms such as difficulty swallowing and inability to move their tongue, lips, or jaw easily.

Causes of Dysarthria

Dysarthria is caused by paralysis or weakness of the speech muscles. The causes of the weakness can vary depending on the type of dysarthria the person has:

  • Central dysarthria is caused by brain damage. It may be the result of neuromuscular diseases, such as cerebral palsy, Huntington’s disease, multiple sclerosis, muscular dystrophy, Parkinson’s disease, or Lou Gehrig’s disease. Central dysarthria may also be caused by injuries or illnesses that damage the brain, such as dementia, stroke, brain tumor, or traumatic brain injury.
  • Peripheral dysarthria is caused by damage to the organs involved in speech. It may be caused by congenital structural problems, trauma to the mouth or face, or surgery to the tongue, mouth, head, neck, or voice box.

Apraxia

Apraxia, also known as dyspraxia, verbal apraxia, or apraxia of speech, is a neurological condition that can cause a person to have trouble moving the muscles they need to create sounds or words. The person’s brain knows what they want to say, but is unable to plan and sequence the words accordingly.

Symptoms and Characteristics of Apraxia

These are some of the characteristics of apraxia:

  • Distorting sounds: The person may have trouble pronouncing certain sounds, particularly vowels, because they may be unable to move their tongue or jaw in the manner required to produce the right sound. Longer or more complex words may be especially hard to manage.
  • Being inconsistent in their speech: For instance, the person may be able to pronounce a word correctly once, but may not be able to repeat it. Or, they may pronounce it correctly today and differently on another day.
  • Grasping for words: The person may appear to be searching for the right word or sound, or attempt the pronunciation several times before getting it right.
  • Making errors with the rhythm or tone of speech: The person may struggle with using tone and inflection to communicate meaning. For instance, they may not stress any of the words in a sentence, have trouble going from one syllable in a word to another, or pause at an inappropriate part of a sentence.

Causes of Apraxia

Apraxia occurs when nerve pathways in the brain are interrupted, which can make it difficult for the brain to send messages to the organs involved in speaking. The causes of these neurological disturbances can vary depending on the type of apraxia the person has:

  • Childhood apraxia of speech (CAS): This condition is present from birth and is often hereditary. A person may be more likely to have it if a biological relative has a learning disability or communication disorder.
  • Acquired apraxia of speech (AOS): This condition can occur in adults, due to brain damage as a result of a tumor, head injury, stroke, or other illness that affects the parts of the brain involved in speech.

Treating Speech Disorders

If you have a speech impediment, or suspect your child might have one, it can be helpful to visit your healthcare provider. Your primary care physician can refer you to a speech-language pathologist, who can evaluate speech, diagnose speech disorders, and recommend treatment options.

The diagnostic process may involve a physical examination as well as psychological, neurological, or hearing tests, in order to confirm the diagnosis and rule out other causes.

Treatment for speech disorders often involves speech therapy, which can help you learn how to move your muscles and position your tongue correctly in order to create specific sounds. It can be quite effective in improving your speech.

Children often grow out of milder speech disorders; however, special education and speech therapy can help with more serious ones.

For ankyloglossia, or tongue-tie, a minor surgery known as a frenectomy can help detach the tongue from the bottom of the mouth.

A Word From Verywell

A speech impediment can make it difficult to pronounce certain sounds, speak clearly, or communicate fluently. 

Living with a speech disorder can be frustrating because people may cut you off while you’re speaking, try to finish your sentences, or treat you differently. It can be helpful to talk to your healthcare providers about how to cope with these situations.

You may also benefit from joining a support group, where you can connect with others living with speech disorders.

National Library of Medicine. Speech disorders. MedlinePlus.

Centers for Disease Control and Prevention. Language and speech disorders.

Cincinnati Children’s Hospital. Stuttering.

National Institute on Deafness and Other Communication Disorders. Quick statistics about voice, speech, and language.

Cleveland Clinic. Speech impediment.

Lee H, Sim H, Lee E, Choi D. Disfluency characteristics of children with attention-deficit/hyperactivity disorder symptoms. J Commun Disord. 2017;65:54-64. doi:10.1016/j.jcomdis.2016.12.001

Nemours Foundation. Speech problems.

Penn Medicine. Speech and language disorders.

Cleveland Clinic. Tongue-tie.

University of Rochester Medical Center. Ankyloglossia.

Cleveland Clinic. Dysarthria.

National Institute on Deafness and Other Communication Disorders. Apraxia of speech.

Cleveland Clinic. Childhood apraxia of speech.

Stanford Children’s Hospital. Speech sound disorders in children.

Abbastabar H, Alizadeh A, Darparesh M, Mohseni S, Roozbeh N. Spatial distribution and the prevalence of speech disorders in the provinces of Iran. J Med Life. 2015;8(Spec Iss 2):99-104.

By Sanjana Gupta

Frequently Asked Questions About ADHD


ADHD is a disorder that isn’t always easy to understand.

Maybe you or your child already has an ADHD diagnosis.

Or maybe you feel like you’ve always struggled with organization, time management, or inattentiveness, and wonder if ADHD is to blame.

These symptoms are all common in people with ADHD diagnoses.

But there’s much more to this disorder than what might typically come to mind when we think of ADHD.

Read on to find answers to some of the most commonly asked questions about ADHD, and to learn how an ADHD speech therapist can help you overcome some of the challenges of this condition.

What Is ADHD?

Attention deficit hyperactivity disorder, or ADHD, is a neurodevelopmental disorder.

Some of the signs of ADHD include:

  • Difficulty focusing
  • Inattentiveness
  • Making frequent mistakes
  • Forgetting to complete tasks
  • Forgetting about appointments
  • Being easily distracted
  • Difficulty with self control
  • Poor executive function
  • Poor impulse control skills
  • Frequent fidgeting
  • Talking over others

Symptoms begin in childhood, which is why pediatric speech therapists are trained in ADHD treatment.

Most cases of ADHD persist into adulthood, though.

You may have had your symptoms overlooked in childhood, only to receive a diagnosis as an adult after being reassessed.

It’s less common, but adult speech therapists can also help with ADHD.

If you have ADHD, you may be at a higher risk for other speech or language disorders as well.

What Causes ADHD?

The exact cause of ADHD remains unknown, but there are a number of factors that may contribute to its development.

These factors include:

  • Environmental influences
  • Genetic factors
  • Prenatal risks
  • Physical differences in the brain

Traditionally, gender was also considered to be a contributing factor to ADHD.

However, new research suggests that the disorder affects a much larger number of girls and women than previously believed.

In the past, diagnosis focused on symptoms more common in young boys.

There is a strong genetic link associated with ADHD, with the disorder often running in families.

However, there doesn’t seem to be a single cause of ADHD.

Ongoing research aims to better understand the various different potential causes.

How Common Is ADHD?

ADHD is considered one of the most common childhood neurodevelopmental disorders.

Studies suggest that ADHD affects around 3 to 5 percent of children, with some putting the numbers as high as 11 percent.

It’s estimated that between 30 and 65 percent of children with ADHD will continue to experience symptoms into adulthood.

The prevalence of ADHD varies between populations and countries.

If you have ADHD, the persistence of symptoms can have a significant impact on your daily life.

This can affect your ability to focus and complete tasks, as well as your relationships and overall well-being.


What’s The Difference Between ADHD And ADD?

We used to distinguish between ADD and ADHD.

But today, they’re considered the same disorder.

There are currently three recognized ADHD subtypes.

What we used to call ADD is now called ADHD – predominantly inattentive.

What we used to call ADHD is now called ADHD – predominantly hyperactive impulsive type.

But there’s also ADHD combined type, a combination of symptoms of the previous two types.

Many professionals and the general population still use the terms ADD and ADHD separately.

This can cause confusion if you are not familiar with the three subtypes.

At District Speech and Language Therapy, we use the term ADHD.

Is There A Link Between ADHD And Obesity?

Potentially, yes.

Adults and children with ADHD do appear to face difficulties in maintaining a healthy weight when compared to those without the disorder.

Research suggests that adults diagnosed with ADHD are 1.81 times more likely to be obese and 1.58 times more likely to be overweight than their non-ADHD peers.

This might be related to lower dopamine levels in ADHD brains.

Dopamine is a neurotransmitter, and eating carbohydrates and sugar can temporarily increase its levels.

Studies do indicate that children who take medication as part of their ADHD treatment plan are less likely to be overweight than those who do not.

Is There A Link Between ADHD And Smoking?

It’s possible, yes.

Adolescents diagnosed with ADHD are more susceptible to smoking from a very young age, according to recent studies.

Research also suggests that smoking rates are higher among ADHD teens than their peers, and adults with the condition also find it more challenging to quit.

Young people with ADHD tend to be twice as likely to develop nicotine addiction as those without it.

Does ADHD Affect Adults As Well As Children?

While ADHD is often associated with childhood, it can also be a lifelong condition that requires ongoing management and support.

Studies show that some children will “grow out of” their ADHD symptoms.

But others will not.

An estimated 10 million adults are currently diagnosed with ADHD.

This is why it’s so important to bring your child in for a speech therapy evaluation as soon as possible.

Early diagnosis of and intervention for your child’s ADHD will help them better manage the condition as adults.

Adults with ADHD may also experience anxiety, depression, poor stress management, and substance abuse issues, particularly during early adulthood.

If you are an adult with ADHD, you may find these challenges can extend to your work and personal life.

This can lead to inconsistent performance at work or difficulty managing daily responsibilities.

Relationship problems and feelings of frustration, guilt, or blame are also common among adults with ADHD.

It is important to note, though, that you are not alone in these struggles, and that people with ADHD can still thrive with the right tools.

How Does ADHD Affect Speech?

The effects of ADHD on speech and language skills can vary widely between individuals.

If you have ADHD, you may be more likely to experience issues with articulation (the ability to produce certain speech sounds accurately).

You may also have issues with the fluency of your speech, like cluttering or stuttering.

In fact, in some cases, ADHD is actually identified through these types of differences.

With ADHD, you may also struggle to organize your thoughts while speaking, resulting in the use of filler words or the repetition of sounds and words.

Unfortunately, this can cause confusion and frustration for both you and your listeners.

How Can Speech Therapy For ADHD Help?

If you or your child struggle with ADHD, a speech therapist can help.

Speech therapy can show you some new strategies and techniques that will help you to better follow instructions, plan and organize, complete tasks, and stay focused.

Scheduling an evaluation with a speech therapist can be a crucial step after an ADHD diagnosis.

Your speech therapist will work closely with you to create a personalized treatment plan.

This can include a range of exercises and interventions tailored to your unique strengths, challenges, and needs.

By improving your speech and language abilities, as well as developing better listening and conversational skills, you can boost your overall communication capabilities and self-confidence.

A speech therapist is a vital resource for anyone looking to effectively manage the challenges associated with ADHD.

Book Your Appointment With District Speech Today

An ADHD diagnosis doesn’t have to be scary.

If you or your child has been diagnosed with ADHD, a speech therapist can help address the challenges surrounding speech and language as it relates to the condition.

Our Washington DC speech pathologists can help you by providing the right tools to allow your ADHD brain to thrive when it comes to challenges with speech and language.

Book your appointment with District Speech today.

District Speech and Language Therapy specializes in speech therapy, physical therapy, and occupational therapy solutions for both children and adults in the Washington, D.C. and Arlington, Virginia areas.

Overcoming Speech Impediment: Symptoms to Treatment

There are many causes and solutions for impaired speech


Speech impediments are conditions that can cause a variety of symptoms, such as an inability to understand language or speak with a stable sense of tone, speed, or fluidity. There are many different types of speech impediments, and they can begin during childhood or develop during adulthood.

Common causes include physical trauma, neurological disorders, or anxiety. If you or your child is experiencing signs of a speech impediment, you need to know that these conditions can be diagnosed and treated with professional speech therapy.

This article will discuss what you can do if you are concerned about a speech impediment and what you can expect during your diagnostic process and therapy.


Types and Symptoms of Speech Impediment

People can have speech problems due to developmental conditions that begin to show symptoms during early childhood or as a result of conditions that may occur during adulthood. 

The main classifications of speech impairment are aphasia (difficulty understanding or producing the correct words or phrases) and dysarthria (difficulty enunciating words).

Often, speech problems can be part of neurological or neurodevelopmental disorders that also cause other symptoms, such as multiple sclerosis (MS) or autism spectrum disorder .

There are several different symptoms of speech impediments, and you may experience one or more.

Can Symptoms Worsen?

Most speech disorders cause persistent symptoms and can temporarily get worse when you are tired, anxious, or sick.

Symptoms of dysarthria can include:

  • Slurred speech
  • Slow speech
  • Choppy speech
  • Hesitant speech
  • Inability to control the volume of your speech
  • Shaking or tremulous speech pattern
  • Inability to pronounce certain sounds

Symptoms of aphasia may involve:

  • Speech apraxia (difficulty coordinating speech)
  • Difficulty understanding the meaning of what other people are saying
  • Inability to use the correct words
  • Inability to repeat words or phrases
  • Speech that has an irregular rhythm

You can have one or more of these speech patterns as part of your speech impediment, and their combination and frequency will help determine the type and cause of your speech problem.

Causes of Speech Impediment

The conditions that cause speech impediments can include developmental problems that are present from birth, neurological diseases such as Parkinson’s disease , or sudden neurological events, such as a stroke .

Some people can also experience temporary speech impairment due to anxiety, intoxication, medication side effects, postictal state (the time immediately after a seizure), or a change of consciousness.

Speech Impairment in Children

Children can have speech disorders associated with neurodevelopmental problems, which can interfere with speech development. Some childhood neurological or neurodevelopmental disorders may cause a regression (backsliding) of speech skills.

Common causes of childhood speech impediments include:

  • Autism spectrum disorder: A neurodevelopmental disorder that affects social and interactive development
  • Cerebral palsy: A congenital (from birth) disorder that affects learning and control of physical movement
  • Hearing loss: Can affect the way children hear and imitate speech
  • Rett syndrome: A genetic neurodevelopmental condition that causes regression of physical and social skills beginning during the early school-age years
  • Adrenoleukodystrophy: A genetic disorder that causes a decline in motor and cognitive skills beginning during early childhood
  • Childhood metabolic disorders: A group of conditions that affects the way children break down nutrients, often resulting in toxic damage to organs
  • Brain tumor: A growth that may damage areas of the brain, including those that control speech or language
  • Encephalitis: Brain inflammation or infection that may affect the way regions in the brain function
  • Hydrocephalus: Excess fluid within the skull, which may develop after brain surgery and can cause brain damage

Do Childhood Speech Disorders Persist?

Speech disorders during childhood can have persistent effects throughout life. Therapy can often help improve speech skills.

Speech Impairment in Adulthood

Adult speech disorders develop due to conditions that damage the speech areas of the brain.

Common causes of adult speech impairment include:

  • Head trauma 
  • Nerve injury
  • Throat tumor
  • Stroke 
  • Parkinson’s disease 
  • Essential tremor
  • Brain tumor
  • Brain infection

Additionally, people may develop changes in speech with advancing age, even without a specific neurological cause. This can happen due to presbyphonia, which is a change in the volume and control of speech due to declining hormone levels and reduced elasticity and movement of the vocal cords.

Do Speech Disorders Resolve on Their Own?

Children and adults who have persistent speech disorders are unlikely to experience spontaneous improvement without therapy and should seek professional attention.

Steps to Treating Speech Impediment 

If you or your child has a speech impediment, your healthcare providers will work to diagnose the type of speech impediment as well as the underlying condition that caused it. Defining the cause and type of speech impediment will help determine your prognosis and treatment plan.

Sometimes the cause is known before symptoms begin, as with trauma or MS. In other cases, impaired speech may be the first sign of a condition, such as a stroke that causes aphasia as its primary symptom.

The diagnosis will include a comprehensive medical history, physical examination, and a thorough evaluation of speech and language. Diagnostic testing is directed by the medical history and clinical evaluation.

Diagnostic testing may include:

  • Brain imaging, such as brain computerized tomography (CT) or magnetic resonance imaging (MRI), if there’s concern about a disease process in the brain
  • Swallowing evaluation if there’s concern about dysfunction of the muscles in the throat
  • Electromyography (EMG) and nerve conduction studies (aka nerve conduction velocity, or NCV) if there’s concern about nerve and muscle damage
  • Blood tests, which can help in diagnosing inflammatory disorders or infections

Your diagnostic tests will help pinpoint the cause of your speech problem. Your treatment will include specific therapy to help improve your speech, as well as medication or other interventions to treat the underlying disorder.

For example, if you are diagnosed with MS, you would likely receive disease-modifying therapy to help prevent MS progression. And if you are diagnosed with a brain tumor, you may need surgery, chemotherapy, or radiation to treat the tumor.

Therapy to Address Speech Impediment

Therapy for speech impairment is interactive and directed by a specialist who is experienced in treating speech problems. Sometimes, children receive speech therapy as part of a specialized learning program at school.

The duration and frequency of your speech therapy program depend on the underlying cause of your impediment, your improvement, and approval from your health insurance.

If you or your child has a serious speech problem, you may qualify for speech therapy. Working with your therapist can help you build confidence, particularly as you begin to see improvement.

Exercises during speech therapy may include:

  • Pronouncing individual sounds, such as la la la or da da da
  • Practicing pronunciation of words that you have trouble pronouncing
  • Adjusting the rate or volume of your speech
  • Mouth exercises
  • Practicing language skills by naming objects or repeating what the therapist is saying

These therapies are meant to help achieve more fluent and understandable speech as well as an increased comfort level with speech and language.

Building Confidence With Speech Problems 

Some types of speech impairment might not qualify for therapy. If you have speech difficulties due to anxiety or a social phobia or if you don’t have access to therapy, you might benefit from activities that can help you practice your speech. 

You might consider one or more of the following for you or your child:

  • Joining a local theater group
  • Volunteering in a school or community activity that involves interaction with the public
  • Signing up for a class that requires a significant amount of class participation
  • Joining a support group for people who have problems with speech

Activities that you do on your own to improve your confidence with speaking can be most beneficial when you are in a non-judgmental and safe space.

Many different types of speech problems can affect children and adults. Some of these are congenital (present from birth), while others are acquired due to health conditions, medication side effects, substances, or mood and anxiety disorders. Because there are so many different types of speech problems, seeking a medical diagnosis so you can get the right therapy for your specific disorder is crucial.


By Heidi Moawad, MD Heidi Moawad is a neurologist and expert in the field of brain health and neurological disorders. Dr. Moawad regularly writes and edits health and career content for medical books and publications.  

Great Speech

A Complete Guide on the Link Between ADHD and Stuttering

Attention Deficit Hyperactivity Disorder (ADHD) affects both children and adults throughout the US and around the world. But when the doctor hands you this label, what makes up the ingredients of the diagnosis?

Most people understand the basics like excitability and difficulty focusing. But, many do not know about ADHD and stuttering.

Understanding ADHD and Stuttering

Is there a link between the two? Research over the years makes a strong case to suggest it.

One speech study  revealed that 50% of the participants who stuttered also had ADHD. To better understand this link, let’s take a closer look at stuttering and ADHD.

What Is Stuttering?

Stuttering is a type of speech impediment in which the flow of communication is disrupted, resulting in broken speech. If you or a loved one are experiencing communication challenges due to stuttering, there is help available. Click here to schedule a free introductory call with us here at Great Speech. This way, you will know you are addressing your fluency in the most practical way possible.

With a stutter, you will hear:

  • Abnormal stoppages (silence)
  • Repetitions (re-re-re-peti-tions)
  • Prolongations (prooooooolongations)

This can disrupt social life and make the person very self-conscious. It is one of the many ADHD struggles.

ADHD Overview

This neurodevelopmental disorder presents with three main characteristics: a hindered attention span, hyperactivity, and impulsive behavior. People who suffer from it may find it difficult to follow directions, ignore distractions, complete tasks, or think before acting.

They often fidget and talk excessively without listening. At times, they may exhibit signs of aggression. These ADHD struggles impair them at home, in school, and in other social settings.

The three subclassifications include ADHD predominantly inattentive type, ADHD predominantly hyperactive-impulsive type, and a combined type. How do these affect the brain?

Stuttering and the ADHD Brain

Though doctors typically diagnose ADHD based on symptoms, physical signs of the disorder can be seen in the brain itself.

The ADHD brain tends to have smaller structures in several regions, including the amygdala, thalamus, and hippocampus, which contribute to socialization, impulse control, concentration, and emotional regulation.

Aside from these physical differences, researchers have also discovered functional disturbances in Broca’s area of the frontal lobe in participants with ADHD. This might help explain the speech issues and poor articulation seen in some people with ADHD.

Research indicates that reduced blood flow to Broca’s area is associated with stuttering. These functional disturbances may help explain how ADHD affects speech and social skills.

How to Cope

Moving through life with a stutter, especially when also dealing with all the other effects of ADHD can be challenging. For young children, it can disrupt their ability to learn and make friends.

Adults may also face serious problems with normal socialization. It can affect their relationships and working life.

Fortunately, there are ways to manage stuttering. Specialized programs designed to reduce stuttering significantly improve learning, socialization, and overall quality of life.

You can also try some of the following tips on your own in conjunction with the program!

Breathing Exercises

Stress will only increase your stutter and worsen your other symptoms of ADHD. Learning to control your breath will help you calm yourself, slow down, and focus. It can also help regulate your emotions.

For one good exercise, sit or lie down comfortably. Start breathing in slowly through your nose at an even pace until your lungs feel full.

At this point, pause for a count of three. Take one more inhale to top it off, then hold for a count of five.

Slowly exhale through partly opened lips. When you feel you have let out all of the air, pause for a count of three.

Blow out one more time with force, then pause for a count of five. Repeat.

This exercise will also give you practice at following step-by-step procedures, another common struggle for those with ADHD.

Speak Slowly

When you speak quickly, your tongue can get tied and your brain’s signals can easily get crossed.

Make a conscious effort to speak slowly. Practice this at home.

Read your favorite books out loud and make purposeful pauses when you feel the stutter coming on. Over time, you will notice a big difference in your speech.

Avoid Bad Words

This does not necessarily mean to avoid swearing. Just watch what you say.

Whenever possible, write down the words that brought on your stutter. If you notice a pattern with any of them, avoid those words altogether in social situations.

But, at home, practice saying these words slowly. This may help.

If certain words persistently make you stutter even with practice, then look up synonyms in the thesaurus. Raising this awareness for yourself can also help you regulate some of the other symptoms of ADHD.

For the Parents

Watching your child struggle with everyday activities and normal speech can feel overwhelmingly heartbreaking. But, you are not completely powerless.

Children especially benefit in  specialized therapy for kids  when dealing with stuttering and the other effects of ADHD. Also, work with them at home on the suggestions listed above.

To help minimize their frustrations, try out the following tips.

Negative reactions to your child’s behavior or speech will only impede their progress. This includes anger, frustration, and even sadness.

Of course, it is normal to feel these things, but try your best to express them in private.

Or, calmly talk with them about why something they did made you feel that way. You can also benefit from some of the breathing exercises.

Attentively Listen

Your child may feel a lot at once and not know what to do with it all. This can create an overwhelming sense of anger and frustration.

Listen to them, even when they say things in a way you do not like. Explain a better way to express it later.

Listening will also reduce frustrations with their stutter. Needing to constantly ask them to repeat themselves will only make them feel frustrated and embarrassed.

This means staying patient. It might take them a long time to say something.

This time will only increase if you get antsy or start doing something else as they speak. Give them the time they need whenever possible.

Reach out for Help

ADHD and stuttering can significantly affect life, whether you suffer from it yourself or see your child impacted. Know that you are not alone.

Great Speech’s team of speech pathologists can offer speech therapy services for a wide range of people, including those related to ADHD and stuttering.

Whatever your needs are, Great Speech has got you covered. Click here to schedule a free introductory call to get matched with one of our specialized therapists, and begin your program to gain more confident communication!




KU speech-language-hearing scholars perform critical screenings, evaluations for Belize community


Fri, 03/29/2024

Kate DeJarnette

LAWRENCE — During the University of Kansas spring break, Kate DeJarnette, speech-language pathologist and clinical assistant professor, and Krysta Green, audiologist and clinical associate professor, took 14 graduate students to San Pedro, Belize, for a clinically based study abroad program.

The primary purpose of the study abroad program was to identify needs for ongoing speech-language-hearing support in schools in San Pedro and to develop collaborative partnerships with teachers and parents.

During the one-week program, seven speech-language pathology (SLP) master’s students, one speech-language pathology doctoral student and seven audiology doctoral students traveled to local schools to conduct screenings and assessments.

Team audiology conducted more than 100 pediatric and adult hearing screenings. They also fit and provided hearing aids to children and adults in need and engaged in parent and teacher education sessions about hearing loss.

Team SLP conducted 85 pediatric speech/language screenings and evaluations, which yielded identification and referral of a dozen children with probable autism spectrum disorder and attention-deficit/hyperactivity disorder (ADHD), three children with suspected genetic disorders or syndromes, and many children with speech, language and communication needs. 

Additionally, team SLP offered several coaching sessions to provide parents with education and strategies to support their children’s speech and language development at home. These parent coaching sessions gave students and faculty the opportunity to deepen their knowledge of the influence of Belizean cultural norms on clinical work and client-centered care.

“I loved immersing myself in the Belizean culture and exploring their education system,” said Carolyn Russell, second-year MA-SLP student. “Screening the diverse pediatric clientele contributed to my personal and professional growth because of the unique needs that each student presented.”

Hanna Kate Hartshorn, second-year MA-SLP student, also appreciated learning more about the Belizean culture and school systems outside of the United States.

“This experience afforded me a unique opportunity to not only take my clinical skills outside of the classroom, but overseas,” Hartshorn said.

In addition to their clinical work, students and faculty participated in a day of island exploration, which included a boat excursion, snorkeling the barrier reef and swimming with nurse sharks and sting rays.

Media Contacts

Department of Speech-Language-Hearing

[email protected]

Front Psychol

Speech Processing Difficulties in Attention Deficit Hyperactivity Disorder

Rina Blomberg

1 Disability Research Division, Institute for Behavioral Science and Learning, Linköping University, Linköping, Sweden

Henrik Danielsson

Mary Rudner

Göran B. W. Söderlund

2 Faculty of Teacher Education Arts and Sports, Western Norway University of Applied Sciences, Sogndal, Norway

Jerker Rönnberg

Associated Data

The datasets generated for this study are available on request to the corresponding author.

The large body of research that forms the ease of language understanding (ELU) model emphasizes the important contribution of cognitive processes when listening to speech in adverse conditions; however, speech-in-noise (SIN) processing is yet to be thoroughly tested in populations with cognitive deficits. The purpose of the current study was to contribute to the field in this regard by assessing SIN performance in a sample of adolescents with attention deficit hyperactivity disorder (ADHD) and comparing results with age-matched controls. This population was chosen because core symptoms of ADHD include developmental deficits in cognitive control and working memory capacity and because these top-down processes are thought to reach maturity during adolescence in individuals with typical development. The study utilized natural language sentence materials under experimental conditions that manipulated the dependency on cognitive mechanisms in varying degrees. In addition, participants were tested on cognitive capacity measures of complex working memory-span, selective attention, and lexical access. Primary findings were in support of the ELU-model. Age was shown to significantly covary with SIN performance, and after controlling for age, ADHD participants demonstrated greater difficulty than controls with the experimental manipulations. In addition, overall SIN performance was strongly predicted by individual differences in cognitive capacity. Taken together, the results highlight the general disadvantage persons with deficient cognitive capacity have when attending to speech in typically noisy listening environments. Furthermore, the consistently poorer performance observed in the ADHD group suggests that auditory processing tasks designed to tax attention and working memory capacity may prove to be beneficial clinical instruments when diagnosing ADHD.

Introduction

Children generally have greater difficulties than adults listening to speech in adverse conditions. Maturation of the auditory system over the first decade is undoubtedly associated with age-related improvement in speech understanding in the presence of noise; however, given that both linguistic and cognitive abilities develop simultaneously with auditory abilities, it is unlikely that maturation of the auditory system alone can account for the widely observed performance differences in children that extend well into adolescence (for review, see Litovsky, 2015). The ease of language understanding (ELU) model (Rönnberg et al., 2013, 2019), which underpins the main theoretical perspective in the current research, provides an overarching account of the role that auditory, linguistic, and cognitive mechanisms play in relation to speech understanding. The model is built upon a large body of research, which demonstrates that effective speech-understanding in noise (SIN) requires complex interactions between both bottom-up and top-down processing (for review, see Stenfelt and Rönnberg, 2009).

Bottom-up processing proceeds automatically and encompasses the auditory system’s ability to parse and decode the phonetic content of speech and to transiently match/compare that content in working memory with pre-existing lexical and semantic representations in long-term memory. If the phonetic content is clearly discernible from the noise and easily matched to pre-existing linguistic representations, then the brain can implicitly comprehend the speech in a rapid, unimpeded fashion. However, if this transient matching process results in too much error, for example, when noise grossly degrades the speech or when linguistic knowledge is deficient, then top-down processes must resolve the decoding task (Rönnberg et al., 2010). Top-down processing recruits cognitive control mechanisms such as attention, inhibition, and recall of pre-existing knowledge about the language and available contextual cues, in order to explicitly discern the speech content from the noise in working memory. The extent to which a child has developed the capacity to utilize and integrate both bottom-up and top-down processing greatly determines how well they cope in adverse listening conditions (McCreery et al., 2017). Accordingly, the ELU model emphasizes the functional importance of working memory capacity and cognitive control mechanisms when bottom-up processing is undermined. Indeed, a widely replicated finding in the literature is that measures of working memory capacity, cognitive load, attention, and inhibition correlate with individual differences in SIN (Rönnberg et al., 2013). Despite this extensive support for the functional role of working memory capacity and cognitive control mechanisms, the predictions of the ELU-model are yet to be thoroughly tested in populations with deficits in top-down processing. The purpose of the current study, therefore, was to contribute to the field in this regard.

Attention deficit hyperactivity disorder (ADHD) is a neurocognitive condition in which hallmark symptoms manifest as developmental deficits in cognitive control (Mueller et al., 2017; Rubia, 2018). Because patients generally have difficulties regulating attention, inhibiting distraction, and maintaining information in working memory (for review, see Pievsky and McGrath, 2018), ADHD presents a prime case for studying the ELU model's top-down component. The current study tested predictions of the ELU model in a sample of Swedish adolescents (11–18 years). The primary goal was to test the general hypothesis that SIN should be more difficult for adolescents with ADHD than for their age-matched counterparts due to a compromised cognitive control system and inefficient working memory capacity (Pievsky and McGrath, 2018; Rubia, 2018). Secondary aims assessed competing hypotheses regarding the effects that certain types of noise have on listeners with ADHD.

The experiment was designed to examine the ELU model's top-down component by utilizing conditions that hampered bottom-up processing and increased the dependency on cognitive control mechanisms and working memory capacity to varying degrees. To this end, participants' SIN abilities for two types of signal quality were assessed under three different masking conditions using age-appropriate sentence materials from the Swedish hearing-in-noise task (HINT-C; Hällgren et al., 2006; Hjertman, 2011). In addition, participants were tested on cognitive measures of complex working memory span, selective attention, and lexical access. We hypothesized (H1) that ADHD participants would demonstrate inferior performance to their age-matched controls on all cognitive measures due to developmental deficits in this domain (Takács et al., 2014; Pievsky and McGrath, 2018). Furthermore, in line with the ELU model, it was hypothesized (H2) that ADHD participants would require on average higher signal-to-noise ratios (SNRs) than controls for efficient SIN because of the increased processing demand background noise places upon top-down processes. Additionally, we expected (H3) individual differences in the cognitive measures to predict overall listening performance in noise.

The signal-quality conditions comprised distortion-free clear (CLR) speech and 12-channel noise-vocoded (NV) speech. NV speech is an acoustic distortion that limits the temporal fine structure and spectral detail of speech but preserves the temporal envelope and is highly intelligible in quiet. Importantly, the effect of the distortion involves greater reliance upon top-down processes than CLR speech to understand in the presence of noise ( Rosemann et al., 2017 ), so we predicted ( H 4 ) participants would require higher SNRs to understand NV speech than CLR speech. For noise comparisons, participants’ speech recognition was evaluated under fluctuating (amplitude-modulated) speech-shaped noise (SSN), two-talker babble (2BAB), and stationary white noise (WN) because these three types of maskers have been shown to place differential demands on top-down processes ( Rönnberg et al., 2010 ).

Multi-talker babble places high demands on cognitive control and working memory processes, particularly when the babble contains only a few speakers (≤4) and is perceptually similar to the speech signal (Rosen et al., 2013). Moreover, multi-talker masking affects age groups differently depending upon the predictability of the speech signal, which has been associated with developmental differences in the top-down capacity to inhibit attention to irrelevant speech and to utilize pre-existing knowledge to infer the content of the babble-masked signal in working memory (Buss et al., 2016). Because HINT sentences provide sufficient contextual support to facilitate prediction of final words (e.g., Grandma eats porridge every day), we hypothesized (H5) that participants' age would covary with SIN performance and (H6) that participants would generally require higher SNRs in conditions where the masker was perceptually similar to the speech signal. In the specific case of CLR signal-quality conditions, 2BAB was perceptually more similar to the speech signal than the two energetic noise conditions (i.e., WN and SSN). Hence, we expected higher SNRs for CLR speech in babble than in energetic noise. Because listening in the amplitude dips of fluctuating noise generally requires more cognitive effort than stationary noise (Rönnberg et al., 2010), we also expected listening in WN to result in the lowest SNRs across CLR-speech comparisons. In the case of NV speech, the signal distortion resembled a harsh, robotic whisper, which made it perceptually more similar to the energetic noise conditions than the audibly distinct, non-distorted 2BAB. It was therefore hypothesized that listening to NV speech in energetic noise would result in higher SNRs than in 2BAB, and in particular, the fluctuating SSN should yield the highest SNRs due to the increased demand on top-down processes.
The pressing question, however, was how adolescents with ADHD would perform under these specific manipulations compared to their age-matched counterparts. We assessed three competing outcomes based upon previous reports in the ADHD literature.

One potential outcome (O1) was that all three maskers would negatively impact ADHD participants' top-down processing such that the ADHD group would require higher SNRs than controls in all experimental conditions. A similar finding was reported by Geffner et al. (1996), who tested SIN in children with ADHD (6–12 years) using three types of maskers: stationary WN, cafeteria noise, and a single talker. Both the ADHD and control groups demonstrated excellent speech-recognition skills in quiet; in noise, however, the ADHD group was inferior to controls across all masking conditions. Pillsbury et al. (1995) tested school-aged children (8–16 years) and also found stationary SSN to impact speech-recognition thresholds more negatively in ADHD participants than age-matched controls; furthermore, overall SIN performance covaried significantly with age.

A second possible outcome (O2) was that in certain conditions, the ADHD group would compensate for task difficulty by exerting more cognitive effort than controls. This is a commonly reported phenomenon in the ADHD literature, which typically manifests as performance equivalent to controls at the behavioral level but significantly different task-related activation patterns at the neural level (e.g., Suskauer et al., 2008; Biehl et al., 2016). Behavioral measures that are sensitive to individual differences in cognitive capacity are also used to reveal underlying differences in cognitive strategies, even when task-related differences are not observed at the group level. For instance, Michalek et al. (2014) tested predictions of the ELU model using 5-talker babble in a sample of young adults with and without ADHD. Although they did not observe a significant group difference in SIN performance (without the aid of visual cues), they found a significant relationship between measures of working memory capacity and SIN ability in their noisiest condition (0 dB SNR) for the ADHD group. The authors concluded, in support of the ELU model, that ADHD participants were relying more heavily upon working memory in this condition than controls in order to maintain a commensurate level of performance. Applying this prediction to the current study, ADHD adolescents would potentially have sufficient spare capacity under less demanding conditions to exert compensatory strategies (cf. Rudner and Lunner, 2014). As such, we would not observe a significant group difference in SNRs when listening to CLR speech in energetic noise (hypothesized to be the least cognitively demanding, see above); but individual performance would still correlate highly with the cognitive measures.
Logically, it follows from this outcome that the high cognitive demand of the NV speech would exhaust ADHD participants' limited cognitive capacity, making it difficult to compensate at levels equivalent to controls. We should therefore observe significantly poorer performance in the ADHD group across all maskers for the NV condition.

Interestingly, one line of research offered a third potential outcome (O3) that is contrary to the predictions of the ELU model. This perspective suggests that ADHD is differentially affected by stationary stochastic noise (e.g., WN) and that the cognitive control system can benefit from this kind of noise stimulation. In their moderate brain arousal (MBA) model, Sikström and Söderlund (2007) argue that low levels of tonic dopamine (implicated in ADHD) place the brain in a poorly aroused state, which directly affects top-down processing due to an inability to filter out irrelevant sensory information, i.e., an abundance of sensory-driven bottom-up input. Low levels of tonic dopamine yield inattention (e.g., Volkow et al., 2009). Stimulating the brain's auditory system with an optimal level of external stationary noise is thought to increase arousal and subsequently enable efficient cognitive control through the mechanism of stochastic resonance, whereby a certain amount of noise interacts with the target signal and facilitates neural transmission, thereby strengthening the signal (for definition, see McDonnell and Ward, 2011). Söderlund and Jobs (2016) tested the MBA model's predictions for SIN in a sample of schoolboys (9–10 years, ADHD vs. controls). They presented sentence materials in stationary SSN at a fixed level of 65 dB SPL. Under these conditions, the ADHD group's resulting SNR for speech recognition was shown to be on par with that of controls. Because performance differences in quiet were significant between groups, the non-significant effect in noise was interpreted as an indication of stochastic resonance. The authors concluded that participants with ADHD can benefit from noise in the context of speech recognition, provided the noise is energetic and stochastic and presented binaurally at a moderate intensity of 65–80 dB SPL. In the current experiment, the speech signal was fixed at 70 dB, and the initial SNR was 0 dB (presented binaurally).
From the perspective of the MBA model, the WN should have a beneficial effect on ADHD participants’ speech perception; hence, they should perform at least as well as controls in both the CLR and NV conditions.

O1 and O2 present outcomes that are consistent with the ELU model's prediction that noise masking impacts speech processing by placing increased demands upon top-down processing. The third prediction (O3) from the MBA model, however, conflicts with the ELU model in that it does not predict a negative effect of WN masking in ADHD but rather a contributory benefit to speech perception. It should be noted that although O3 predicted that ADHD participants would perform on par with controls, this finding would not be conclusive evidence that participants benefited from the noise (because participants may have exerted more effort, as in O2 above). However, a replication of the finding of Söderlund and Jobs (2016) that applies even in the NV condition would certainly warrant further consideration for the MBA model. Still, more convincing evidence for a beneficial effect of WN would be revealed if ADHD participants demonstrated efficient performance at significantly lower SNRs than controls.

Materials and Methods

Participants

The study was approved by the Regional Ethics Committee in Linköping, Sweden (Dnr 2016/169-31). Participants volunteered for the study through advertisements posted in schools, clinics, and online social media platforms. Volunteers who met the inclusion criteria were recruited for the study regardless of where they resided in Sweden. For both groups, the principal inclusion criteria were an age of 11–18 years; Swedish as a first language; and the ability to make an informed decision about participation of one's own accord. In addition, the inclusion requirements for the ADHD group were:

  • A formal diagnosis of ADHD according to Swedish interdisciplinary assessment standards ( Granholm et al., 2016 ) operating under the framework of DSM-5 ADHD criteria.
  • Children who were not treating their ADHD symptoms with prescription medication or children who were taking central stimulants but agreed to a 24-h washout from medication immediately prior to the day of participation.

At the time of data analysis, participants were excluded if the pure-tone audiogram indicated non-normal hearing acuity (see below) or if they were unable to discern the speech materials in quiet at a threshold ≤60 dB SPL (the average sound-pressure level of conversational speech at a distance of 1 m). Because experimental assumptions required that ADHD symptoms be absent in the control group, control children were additionally excluded if their parents' ratings on the SNAP-IV ADHD rating scale (see below) exceeded the 90th percentile for symptom scores pertaining to any DSM-5 subtype. In all, a total of 42 participants were recruited, four of whom were excluded from analysis: three because speech-reception thresholds (SRTs) in quiet exceeded 60 dB and one control participant because SNAP-IV ratings indicated a high level of inattentiveness. The remaining 38 participants consisted of 22 controls (M age = 16, SD age = 2.6, males = 8) and 16 ADHD participants (M age = 14.6, SD age = 2.2, males = 10). See Table 1 and Figure 1 for further information regarding participants' hearing, cognition, and symptom scores.

Table 1. Group statistics for cognitive capacity and hearing-in-quiet tasks.

Table shows group means (standard deviations), F-statistics, and effect sizes (ω²) for significant results (*p < 0.05, **p < 0.01, ***p < 0.001).

Figure 1.

SNAP-IV parental ratings for ADHD symptoms per DSM-5 subtype and group. SNAP-IV scores range from 0 (no symptoms) to 3 (highly frequent symptoms). Boxplots represent min/max, interquartile range, and median.

Sound Materials

Sound materials were presented to participants using closed Sennheiser HD 205 headphones from a Windows laptop computer (64-bit OS, Intel® Core™ i7-4700MQ @ 2.4 GHz). All auditory stimuli were created in MATLAB and calibrated for the presentation hardware by the Department for Technical Audiology at Linköping University. The speech materials were suitable for children (Hjertman, 2011) and consisted of phonemically balanced Swedish sentences that were 3–7 words in length (the shortest sentence had a duration of 1.6 s and consisted of three words; the longest sentence had a duration of 3.4 s and consisted of four words). NV sentences were generated by first dividing the frequency range of non-distorted sentences into 12 logarithmically spaced channels and then applying the amplitude envelope from each channel to band-limited noise within the same frequency band. The bands of noise were then recombined to create the NV sentences, which were adjusted with root-mean-squared equalization to match the sound levels of the original sentences. The WN masker consisted of a sound file with equally distributed frequencies (0–8 kHz). 2BAB was created by mixing the soundtracks of two native Swedish speakers (one male and one female) reading from a Swedish newspaper. The fluctuating SSN was constructed by modulating the SSN of the target speech with the low-pass filtered (<32 Hz) instantaneous amplitude of the 2BAB.

Test Procedure

Participants were tested in a quiet room/location at their place/town of residence (e.g., a secluded room in the home or the local library). All tests, with the exception of the d2 test of attention (see below), were installed on a laptop computer and run from a MATLAB platform. During auditory tests, participants were told to place themselves in a comfortable position and to concentrate upon the sound stimulus from the headphones. During cognitive tests, participants viewed the laptop computer screen on a table at a comfortable distance in front of them and responded to the tasks using the left/right mouse buttons when required. The d2 test of attention was a pen-and-paper task, which participants performed seated at a table. Because the entire experimental session took circa 75 min to complete, the test leader encouraged participants to take short breaks between tests if needed.

Auditory Tests

The auditory tests were administered in the following order: pure-tone audiogram, CLR speech in quiet, NV speech in quiet, then HINT. For all speech understanding tasks, sentence trials were marked as accurate if the participant could orally recite the entire sentence without error. The detailed procedure for each of these auditory measures is outlined below.

Hearing in Quiet

Participants were screened for normal hearing thresholds (<20 dB HL for the octave frequencies 0.25–8 kHz) using the standard, revised Hughson-Westlake approach on a MATLAB-based audiometer ( Cooke, 1999 ). Pure-tone averages were derived by calculating the grand mean of all octave frequency thresholds in both ears. Participants were also screened for the ability to understand both CLR and NV speech in quiet. Resulting SRTs represented the minimal level at which participants could correctly repeat the sentences two out of three times and were obtained using a descending approach from 70 dB SPL (−5, +2 dB). Together, the hearing in quiet tasks took circa 15 min to complete.
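The descending threshold search (start at 70 dB SPL, step down 5 dB after a correct repetition, up 2 dB after an error) can be illustrated with a toy staircase. This Python sketch is hypothetical: the stopping rule is simplified to "two correct responses on ascending presentations at the same level," which approximates, but is not identical to, the two-out-of-three clinical criterion.

```python
def descending_srt(respond, start=70.0, step_down=5.0, step_up=2.0,
                   max_trials=50):
    """Toy descending staircase: the level drops after each correct
    repetition and rises after an error; the estimated SRT is the first
    level yielding two correct responses on ascending presentations.
    `respond(level)` returns True if the sentence was repeated correctly."""
    level, ascending, hits = start, False, {}
    for _ in range(max_trials):
        correct = respond(level)
        if ascending and correct:
            hits[level] = hits.get(level, 0) + 1
            if hits[level] == 2:          # simplified 2-correct criterion
                return level
        if correct:
            level, ascending = level - step_down, False
        else:
            level, ascending = level + step_up, True
    return level
```

With a deterministic listener who succeeds at or above 55 dB, the track descends past the threshold, then bounces in 2 dB ascending steps until the criterion is met near the true value.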

Hearing in Noise

For the HINT test (Hällgren et al., 2006), CLR and NV sentence lists (20 unique sentences in each) were presented in WN, 2BAB, and fluctuating SSN. The speech signal was held constant at 70 dB SPL, and the noise varied adaptively in steps of 2 dB from the initial SNR of 0 dB. The resulting outcome measure was the mean SNR for 50% correctly repeated sentences and was estimated from the last 16 sentences in each list per experimental condition (the first four sentences were used as practice trials). The order of conditions (masker type per signal quality) was counterbalanced using a digram-balanced Latin squares protocol, and the entire test procedure had a duration of approximately 30–35 min.
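The adaptive track amounts to a one-up/one-down staircase whose average converges on the 50%-correct SNR. The sketch below is illustrative Python; the simulated response function passed in is hypothetical, not part of the HINT materials.

```python
def hint_snr(respond, n_sentences=20, n_practice=4, step=2.0, start_snr=0.0):
    """One-up/one-down adaptive track: SNR drops 2 dB after a correctly
    repeated sentence and rises 2 dB after an error. The outcome is the
    mean SNR over the last 16 of 20 sentences (the first four are
    practice), estimating the 50%-correct point."""
    snr, track = start_snr, []
    for _ in range(n_sentences):
        track.append(snr)
        snr += -step if respond(snr) else step
    return sum(track[n_practice:]) / (n_sentences - n_practice)
```

For a deterministic listener who succeeds at or above −4 dB SNR, the track quickly settles into a −4/−6 dB oscillation, so the mean over the last 16 sentences estimates the threshold midway between the two levels.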

Cognitive Measures

The extent of ADHD symptoms in participants was obtained from parents by way of the SNAP-IV rating scale, and cognitive control was assessed through Swedish versions of three different tasks: reading span, size-comparison span (SIC span), and d2 test of attention. In addition, participants’ efficiency in accessing lexical information in long-term memory was measured with a lexical decision task. The cognitive test battery was administered after completion of the HINT task in the following order: d2 test, reading span, SIC span, and lexical decision. The procedure for each of these cognitive measures is detailed below.

SNAP-IV ADHD Scale

The SNAP-IV parent/teacher questionnaire (Swanson et al., 2012) is a neuropsychological 4-point scale (0 = not at all, 3 = very much) designed to assess the extent to which a child expresses symptoms of hyperactivity/impulsivity and inattention associated with ADHD. Parental ratings were obtained with the 33-item Swedish version (Dunerfeldt et al., 2010), and scores were calculated on the first 18 items corresponding to the ADHD subtypes specified in DSM-5: inattention (items 1–9), hyperactivity/impulsivity (items 10–18), and combined inattention/hyperactivity (items 1–18). In clinical settings, parental scores exceeding the 95th percentile for each subtype are considered diagnostically relevant (attentive disorder ≥ 1.78 points, hyperactivity disorder ≥ 1.44 points, combined attentive and hyperactive disorder ≥ 1.67 points).

d2 Test of Attention

Proficiency in selective attention was assessed using the 4-min d2 test of attention (Brickenkamp et al., 2010). The d2 test is a standardized neuropsychological test that requires participants to mark (with a pen stroke on paper), under time constraints, target characters embedded in strings of distractor characters (12 lines of 57 characters, 25–26 targets in each; 20 s allowed per line). The resulting score used in this study was the total number of marked target characters minus the total number of commission and omission errors. The scores were transformed into standardized scores (min = 70, max = 130 points) according to age norms, for which a higher score corresponded to greater proficiency in selective attention.

Reading Span

The reading span test (Rönnberg et al., 1989) presented participants with eight unique lists of three-word sentences of increasing length (2 × lists of 2, 3, 4, and 5 sentences). Sentences within each list were presented one word at a time (interstimulus interval = 0.8 s), and participants were required to both remember and classify (yes/no button press) each presented sentence as sensible or absurd (e.g., Dogs bark loudly in contrast to Fish climb trees). After each list presentation, participants were asked to orally recall either the first or the last word (determined pseudo-randomly) of each sentence in the list. The resulting reading span measure was the % of correctly recalled words for correctly classified sentences (max = 28) and represents participants' capacity to maintain and process information in working memory (Rönnberg et al., 2016). The test took participants on average 8 min to complete.

Size-Comparison Span

The SIC-span test ( Sörqvist et al., 2010 ) presented 10 unique lists of target nouns together with distracting noun pairs of increasing length (2 × lists of 2, 3, 4, 5, and 6 items). All nouns in each list belonged to the same taxonomic category (mammals, fruit, etc.). Within each list, the task was to first answer a question ( yes/no button press) about the relative size of the noun pairs (e.g., are raspberries bigger than watermelons ?) and to remember a target noun (e.g., banana ) presented immediately after each size comparison. Because noun pairs belong to the same category as the target noun and must be processed in working memory, they are considered a semantic distraction to the memory task. At the end of each list, participants were asked to orally recall the target nouns. The resulting SIC-span measure was the % of correctly recalled targets corresponding to each correctly answered comparison (max = 40) and represents participants’ capacity to maintain and process relevant information and to inhibit competing semantic information in working memory. The entire task had a duration of approximately 10 min.

Lexical Decision

A lexical decision task (Holmer et al., 2016) was used to examine lexical access efficiency. The task was to determine as quickly and as accurately as possible (yes/no button press) whether a string of three letters constituted a real Swedish word. The test consisted of 40 items divided into three lists: 10 pseudowords (e.g., wox), 10 nonwords (e.g., wxa), and 20 real words of high familiarity (e.g., wax); the presentation order was counterbalanced over all three lists. The 40-item list took all participants less than a minute to classify. The dependent measure was calculated by dividing the total number of correct responses per participant by the amount of time (seconds) spent responding on all trials; the resulting lexical decision score represented the number of correctly classified words per second (Woltz and Was, 2006).
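The scoring rule amounts to a simple rate computation (a minimal sketch; the input shapes are hypothetical):

```python
def lexical_decision_score(correct, response_times_s):
    """Rate score per Woltz and Was (2006) as described above: total
    correct classifications divided by the total time (seconds) spent
    responding, i.e., correctly classified items per second."""
    return sum(correct) / sum(response_times_s)
```

For example, 38 correct responses out of 40 trials at 0.5 s per response yields 38 / 20 = 1.9 words per second.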

Data Analysis

Missing Data

In total, 6.7% of values were missing from the dataset of cognitive measures (listed above), which arose for the most part from technical/procedural difficulties but also from the occasional decision by a participant to terminate a task mid-session (often related to fatigue). When all variables were entered into a missing value analysis (implemented in IBM SPSS 25 statistical software), Little's MCAR test (Little and Rubin, 2002) revealed that the values were missing completely at random, χ²(10) = 7.3, p = 0.70. However, when age was entered as a predictor, Little's MCAR test was significant, χ²(13) = 403.2, p < 0.001, indicating that the pattern of missing values was not completely random but instead randomly distributed across age, a pattern termed "missing at random" (MAR; see Acock, 2005 for details). The expectation maximization method was therefore used to impute the missing items (iterations = 5,000) because it is suitable for datasets with MAR patterns and is a robust imputation method that produces a single, complete dataset with less bias than listwise/pairwise deletion or mean-replacement methods (Acock, 2005). Additionally, a single value was missing from the dataset of auditory tests due to a participant's decision to abstain from completing a particular hearing-in-noise condition. For consistency, this missing value was also replaced using the expectation maximization single-imputation method (iterations = 5,000). All subsequent statistical analyses used the complete dataset with imputed values.
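The logic of expectation-maximization imputation can be illustrated with a simplified Python/NumPy sketch. This assumes multivariate normality and, unlike a full EM implementation (and unlike the SPSS routine used here), omits the covariance correction for imputation uncertainty; it is for illustration only.

```python
import numpy as np

def em_impute(X, n_iter=5000, tol=1e-8):
    """Illustrative (simplified) EM imputation: repeatedly fill missing
    entries (np.nan) with their conditional expectation given the observed
    entries, then re-estimate the mean and covariance from the completed
    data, until the imputed values stop changing."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    # starting values: column means of the observed data
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-9 * np.eye(X.shape[1])  # ridge
        X_prev = X.copy()
        for i in np.where(miss.any(axis=1))[0]:
            m, o = miss[i], ~miss[i]
            # conditional mean of missing given observed entries
            X[i, m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(
                cov[np.ix_(o, o)], X[i, o] - mu[o])
        if np.max(np.abs(X - X_prev)) < tol:
            break
    return X
```

When two variables are strongly related, the imputed value converges toward the regression prediction from the observed variable rather than the (more biased) column mean.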

Statistical Analyses

All analyses were undertaken in IBM SPSS. Comparisons between groups in age, hearing thresholds in quiet (pure-tone averages, SRT), and cognitive measures were analyzed with a one-way ANOVA. The distribution of gender between groups was compared with Fisher's exact test. To assess HINT performance, a mixed repeated-measures ANCOVA was undertaken and included one grouping factor (ADHD, controls); two repeated measures: signal quality (CLR, NV) and masker type (WN, SSN, 2BAB); and age (mean centered) as the covariate. Bonferroni-corrected planned comparisons investigated differences between maskers (WN vs. SSN vs. 2BAB) for both CLR and NV speech and also group differences for each masker per speech-quality condition in accordance with predictions (H6; O1–O3). Estimates of means and standard deviations (adjusted for the covariate) were reported, and partial-eta squared (ηp²) was used to assess effect size. A two-step hierarchical multiple regression analysis was conducted to examine the relationship between overall HINT performance and cognitive capacity (H3). Because the variables pertaining to working memory capacity (reading span and SIC span) and selective attention (d2 score) are thought to tap into the same underlying top-down construct, principal components analysis (PCA) was used to derive a single latent predictor, which we denoted cognitive control. Similarly, in order to assess whether differences in hearing acuity were also predictive of HINT outcomes, a single latent predictor representing baseline acuity was derived from participants' pure-tone averages and SRTs in quiet using PCA. The first step in the regression analysis included the hearing-in-quiet predictor; the second step added predictors corresponding to cognitive capacity: cognitive control and lexical access efficacy (lexical decision score) using the forced entry method (both predictors entered into the model in one step and in order of decreasing tolerance).
The dependent measure was the grand mean over all HINT conditions for each participant. Model assumptions were assessed statistically.
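The two analysis steps just described, deriving a single latent component from z-scored variables and then testing the incremental contribution of added predictors, can be outlined in Python/NumPy. This is an illustrative sketch with synthetic inputs (assumed to be NumPy arrays), not the SPSS procedure itself.

```python
import numpy as np

def pca_factor_scores(*columns):
    """Scores on the first principal component of the z-scored input
    variables, analogous to the single latent regressors ('cognitive
    control', 'baseline acuity') described above."""
    Z = np.column_stack([(c - c.mean()) / c.std(ddof=1) for c in columns])
    vals, vecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    return Z @ vecs[:, np.argmax(vals)]     # first-component scores

def r_squared(y, X):
    """R-squared of an OLS fit with intercept; in a hierarchical
    regression, the step-2 increment is the increase in R-squared when
    further predictors are added to the step-1 model."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

In this scheme, step 1 would enter the baseline-acuity scores alone, and step 2 would add the cognitive control and lexical decision predictors; the change in R² between the two fits indexes the incremental contribution of cognitive capacity.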

Group Comparisons

By design, there were no significant differences between groups in age, F(1, 36) = 3.0, p = 0.09, ω² = 0.05, or in gender proportions (p = 0.19, Fisher's exact test). In addition, the presence of ADHD symptoms in the control group was negligible (all scores < 0.7) according to parental SNAP-IV ratings (Figure 1). Table 1 reports the group means, standard deviations, and results of the one-way ANOVA for the cognitive capacity measures and hearing thresholds in quiet. As hypothesized (H1), the ADHD group's performance in working memory capacity, selective attention, and lexical access efficacy was significantly inferior to that of the control group.

The between-group comparison for pure-tone averages fell just short of statistical significance (p = 0.05), but for speech in quiet the ADHD group required on average a 4.5 dB increase in both signal-quality conditions to accurately repeat the sentences. To explore this difference in thresholds more closely, a Pearson's correlation analysis (two-tailed) was undertaken to investigate the relationship between hearing thresholds and cognitive capacity. Results revealed that individual differences in pure-tone averages were moderately associated with the variance in both CLR (r = 0.63, p < 0.001) and NV (r = 0.48, p = 0.002) SRTs. In addition, pure-tone averages were negatively correlated with selective attention scores (r = −0.40, p = 0.013), suggesting that detecting a tone in quiet involves attentional mechanisms. No other correlations between hearing thresholds and cognitive capacity variables were observed.

HINT-Analysis

Results from the repeated-measures ANCOVA indicated that the covariate age was a strong predictor of SNRs, F(1, 35) = 8.9, p = 0.01, ηp² = 0.20 (H5). After controlling for age, there was a significant main effect of group, F(1, 35) = 21.3, p < 0.001, ηp² = 0.38, confirming (H2) that the ADHD group had greater difficulty than controls at understanding speech in noise (Figure 2A). There was also a main effect of masker type, F(2, 70) = 14.9, p < 0.001, ηp² = 0.30, in which SNRs were lowest for 2BAB, closely followed by WN, and highest for fluctuating SSN (Figure 2B). In addition, a main effect of signal quality, F(1, 35) = 366.2, p < 0.001, ηp² = 0.91, confirmed (H4) that NV speech was more difficult than CLR speech to understand in the presence of noise (Figure 2C).

Figure 2.

Graphs show significant main effects for (A) group (controls vs. ADHD), (B) masker type (WN: white noise; SSN: fluctuating speech-shaped noise; 2BAB: two-talker babble), and (C) signal-quality manipulation (CLR: clear vs. NV: noise-vocoded speech). Positive and negative estimated marginal means of SNRs represent a reduction (<70 dB) and an increase (>70 dB) in noise levels, respectively. Error bars represent 95% confidence intervals.

A significant interaction between masker type and signal quality was also observed, F(2, 70) = 23.6, p < 0.001, ηp² = 0.40, suggesting that performance associated with differences in signal quality was also differentially affected by masker type (Figure 3). Bonferroni-adjusted statistical comparisons verified (H6) that participants' SNRs were differentially affected as a function of the perceptual overlap between masker and signal. As hypothesized, 2BAB had the greatest masking effect on CLR speech, whereas WN had the least (2BAB > WN, p < 0.001; 2BAB > SSN, ns; SSN > WN, ns), and for NV speech, fluctuating SSN had the greatest masking effect and 2BAB had the least (SSN > 2BAB, p < 0.001; SSN > WN, p < 0.05; WN > 2BAB, p < 0.001). No other interactions were significant; instead, response profiles differed only in elevation, with the ADHD group performing consistently poorer than controls.

Figure 3.

Results of between- and within-group comparisons per noise condition (WN: white noise; SSN: fluctuating speech-shaped noise; 2BAB: two-talker babble) and signal-quality manipulation (clear vs. noise-vocoded speech). Positive and negative estimated marginal means of SNRs represent a reduction (<70 dB) and an increase (>70 dB) in noise levels, respectively (the speech signal was held constant at 70 dB). Asterisks indicate significant between-group differences for each masker type per signal-quality condition (*p < 0.05, ***p < 0.001, Bonferroni corrected).

Results from the Bonferroni corrected group comparisons revealed that the ADHD group required significantly higher SNRs than controls in all maskers (as predicted in O1) except for SSN, where a statistical group difference was not apparent for the CLR-speech condition ( Figure 3 ). To investigate if this non-significant finding corresponded with the predictions from O2, a Pearson’s correlation analysis ( two-tailed ) was undertaken to determine if there was a relationship between individual differences in SNRs and cognitive capacity for this condition. Complex working memory (reading span, r = −0.46, p = 0.004; SIC span r = −0.40, p = 0.013) and selective attention ( r = −0.42, p = 0.009) were negatively correlated with SNRs. There was no significant association, however, between SNRs and lexical access efficacy ( r = −0.30, p = 0.063). This finding is in line with the prediction from O2 and suggests that participants with less efficient cognitive control found the task’s conditions more taxing even though efficiency in lexical processing was similar across participants for this condition.

Regression Analysis

Latent regressors.

Two principal component analyses were used to generate two latent regressors corresponding to cognitive control and baseline hearing acuity. The cognitive control regressor was derived from the reading span, SIC span, and d2 test variables, all of which were standardized by z-transformation prior to component extraction. Bartlett's test of sphericity indicated that these three variables were sufficiently correlated, χ²(3) = 33.1, p < 0.001, and the Kaiser-Meyer-Olkin measure indicated reasonable sampling adequacy (KMO = 0.68). The extracted component had an eigenvalue of 2.1 and explained 70.5% of the variance. The baseline acuity regressor was compiled from the hearing-in-quiet measures, likewise z-transformed prior to component extraction: pure-tone averages, CLR-SRT, and NV-SRT. Bartlett's test indicated sufficiently large correlations between the individual measures, χ²(3) = 32.4, p < 0.001, and the Kaiser-Meyer-Olkin measure indicated adequate sampling (KMO = 0.69). The resulting component had an eigenvalue of 2.1 and explained 70.5% of the variance. For both the cognitive control and baseline acuity components, the regression method was used to compute factor scores for each participant.
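The extraction steps described above can be sketched in a few lines of numpy. This is an illustrative reconstruction with synthetic data, not the study's data or exact software: three correlated stand-in measures are z-transformed, the first principal component of their correlation matrix is extracted, and participants are scored by simple projection onto that component (a simplification of the regression-method scoring the study reports).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three cognitive measures (n = 38 participants);
# the real study used reading span, SIC span, and d2 scores.
latent = rng.normal(size=38)
scores = np.column_stack([latent + rng.normal(scale=0.6, size=38) for _ in range(3)])

# z-transform each measure before component extraction.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)

# Principal components of the correlation matrix, sorted by eigenvalue.
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals[0] / eigvals.sum()   # proportion of variance, first component
factor_scores = z @ eigvecs[:, 0]        # one latent score per participant

print(f"eigenvalue = {eigvals[0]:.2f}, variance explained = {explained:.1%}")
```

The same logic applies to the baseline acuity regressor, substituting the hearing-in-quiet measures for the cognitive ones.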

Model Assumptions

Pearson's cross-correlation analysis (Table 2) was used to assess the assumptions of linearity and no perfect collinearity. All regressors had a significant linear association with the outcome variable (HINT). Although the cognitive control predictor was moderately correlated with the other two regressors, variance inflation factors gave no indication of high collinearity (i.e., no VIF > 5) among the predictors (VIFmax = 1.3). A non-significant Shapiro-Wilk test, W(38) = 0.98, p = 0.698, indicated that the distribution of residuals did not deviate from the assumption of normality, and a non-significant Koenker's BP test (LM = 0.75, p = 0.862) supported the assumption of homoscedastic residual variance. Cook's distance also indicated that no single value had an excessive influence (D > 1) on either regression model as a whole (Dmax = 0.15).
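For readers who wish to reproduce these diagnostics, the sketch below computes VIFs, a Shapiro-Wilk test on the residuals, and Cook's distances on synthetic stand-in data (the predictor names and data are illustrative assumptions, not the study's; Koenker's BP test is omitted for brevity).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 38, 3

# Hypothetical predictors (cognitive control, lexical access, baseline acuity)
# and outcome (overall HINT score) -- synthetic data for illustration only.
X = rng.normal(size=(n, k))
y = X @ np.array([-0.4, -0.5, 0.3]) + rng.normal(scale=0.5, size=n)

def vif(X, j):
    """VIF_j = 1 / (1 - R^2_j), regressing predictor j on the others."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ beta
    r2 = 1 - resid.var() / X[:, j].var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(k)]   # VIF > 5 would flag collinearity

# Fit the full model and inspect its residuals.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

W, p = stats.shapiro(resid)            # normality of residuals

# Cook's distance (D > 1 would flag an overly influential case).
H = A @ np.linalg.inv(A.T @ A) @ A.T   # hat matrix
lev = np.diag(H)
mse = resid @ resid / (n - k - 1)
cooks = resid**2 / ((k + 1) * mse) * lev / (1 - lev) ** 2

print(max(vifs), W, p, cooks.max())
```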

Table 2. Cross-correlations (Pearson's r, two-tailed) between the outcome (HINT) and regressor variables (cognitive control, lexical access, and baseline acuity). Asterisks indicate statistically significant correlation coefficients (*p < 0.05, **p < 0.01, ***p < 0.001).

Regression Results

Table 3 reports the parameters for the two regression models. The first model indicated that baseline hearing acuity was a significant predictor of outcomes, F(1, 36) = 8.7, p = 0.006, accounting for 19% of the variance in participants' general SIN ability (adjusted R² = 0.17). The addition of the two cognitive predictors in the second model (cognitive control and lexical access efficacy) significantly improved predictive power (see Table 3), and the resulting omnibus model, F(3, 34) = 25.7, p < 0.001, explained 69% of the variance in overall HINT performance (adjusted R² = 0.67). All predictors made significant contributions to the final model, with the cognitive control and lexical access predictors contributing with relatively equal importance to model outcomes (β = −0.41 and −0.47, respectively). Taken together, these results robustly support the hypothesis (H3) that individual differences in cognitive capacity predict speech understanding performance under adverse listening conditions.
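The two-step (hierarchical) procedure amounts to comparing R² before and after the cognitive predictors are entered and testing the R² change with an F statistic. A minimal sketch, again on synthetic stand-in data with illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 38

# Synthetic stand-ins: baseline acuity entered first, then the two cognitive
# predictors, mirroring the two-step model (names and effects are illustrative).
acuity = rng.normal(size=n)
cognitive = rng.normal(size=n)
lexical = rng.normal(size=n)
hint = 0.4 * acuity - 0.4 * cognitive - 0.5 * lexical + rng.normal(scale=0.5, size=n)

def r_squared(y, *predictors):
    """OLS R^2 for y on the given predictors (with intercept)."""
    A = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared(hint, acuity)
r2_step2 = r_squared(hint, acuity, cognitive, lexical)
delta_r2 = r2_step2 - r2_step1

# F test for the R^2 change: q = 2 predictors added, k = 3 in the full model.
q, k = 2, 3
f_change = (delta_r2 / q) / ((1 - r2_step2) / (n - k - 1))
print(f"R2 step1 = {r2_step1:.2f}, step2 = {r2_step2:.2f}, F change = {f_change:.1f}")
```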

Table 3. Two-step regression model for HINT outcomes and associated contributions of baseline hearing acuity, cognitive control, and lexical access efficacy. Asterisks indicate statistically significant coefficients (*p < 0.05, **p < 0.01, ***p < 0.001).

Discussion

The purpose of the current study was to test the ELU model's (Rönnberg et al., 2013) general hypothesis that SIN processing should be more difficult for adolescents with ADHD than for age-matched controls due to their deficient capacity to regulate attention and inhibition and to maintain information in working memory. The experiment tested two types of signal quality (CLR speech vs. NV speech) under three different masking conditions (WN, amplitude-modulated SSN, and 2BAB) using HINT sentence materials (Hällgren et al., 2006). These experimental manipulations were chosen because they have been shown to place differential demands upon top-down processing (Stenfelt and Rönnberg, 2009; Rönnberg et al., 2010, 2019). Participants were also assessed on cognitive capacity measures of complex working memory span, selective attention, and lexical access. Primary findings were in support of the ELU model. Results corresponding to the specific manipulations are discussed in detail below.

Effect of Age

The study was designed to test the ELU model's cognitive (i.e., top-down) component associated with SIN processing. To minimize confounds of auditory development, the experimental sample included adolescents between 11 and 18 years, because maturation of the auditory system typically continues over the first decade of life (Moore and Linthicum, 2007). In addition, the experiment controlled for differences in linguistic development by utilizing natural-language speech materials appropriate for the age group being tested (Hjertman, 2011).

Cognitive processes are known to mature during adolescence (Luna, 2009; Peverill et al., 2016), and in support of the ELU model's predictions, studies have shown that younger teens require higher SNRs than older teens/adults when listening to speech under conditions that place demands on top-down processes (Stuart, 2008; Jacobi et al., 2017). Accordingly, age significantly covaried with HINT performance in the current study, as anticipated. After controlling for age, significant main effects of masker type, signal quality, and group remained, indicating that the experimental conditions placed differential demands upon individual participants' cognitive capacity.

Effect of Signal Quality

Participants found listening to NV speech in noise more challenging than CLR speech, and ADHD participants required higher SNRs than controls in all NV conditions. These results were expected because NV speech in noise is known to increase reliance upon top-down processing (Rosemann et al., 2017), and hence performance under such conditions should be impaired (relative to age-matched controls) in ADHD. Interestingly, ADHD participants' perception of both CLR and NV speech differed from that of controls even in quiet. It is, however, not uncommon in the literature for individuals with ADHD to show impairments on central auditory processing tasks even though they have normal peripheral hearing (e.g., Gascon et al., 1986; Lanzetta-Valdo et al., 2016; Fostick, 2017). Indeed, a subject of some controversy is whether central auditory processing disorder is a diagnosis distinguishable from ADHD (Moss and Sheiffele, 1994; Riccio and Hynd, 1996).

Central auditory processing tasks assess skills such as auditory closure, binaural integration, and temporal order judgment, all of which are necessary for efficient speech perception. In the current study, the variance in SRTs in quiet was not correlated with any of the cognitive capacity measures but was instead associated with individual differences in pure-tone averages. Individual differences in pure-tone averages were, however, associated with attentional performance. Clearly, listening to a signal at decreasing levels of intensity involves attentional mechanisms; albeit for speech discrimination in quiet, behavioral measures that tap into domain-general selective attention may not be sufficiently sensitive to explain individual differences in performance. Speculatively, central auditory processing tasks may be more appropriate for detecting attentional impairments associated with fine-grained auditory discrimination in quiet. The poorer performance observed in the ADHD group in quiet may indeed be representative of the central auditory processing impairments that are commonly associated with ADHD in the literature.

Effect of Noise

Performance associated with differences in signal quality was also differentially affected by masker type, and the absence of group interactions demonstrated that the different types of maskers affected both groups' listening performance in a similar fashion. This finding robustly supports the ELU model's predictions about the varying degrees to which these specific maskers tax top-down processes.

Masking can have both beneficial and negative effects on attentional performance. Positive masking effects arise when a masker drowns out the negative impact of other competing noises; for instance, the continuous hum of a ventilation system in an office may drown out potentially distracting voices in the surrounding environment and enable one to concentrate more efficiently on information in the immediate environment. Negative masking effects arise when the attended signal is masked by the noise. The current study investigated only the effects of masking upon the speech signal; thus, in the discussion that follows, the term "masking effects" refers to the latter case of a masked signal.

Two-Talker Babble

A variety of studies have shown that the adverse effects of informational masking are relative to the degree of perceptual similarity the masker has with the target speech ( Rosen et al., 2013 ). In the current study, the 2BAB masker consisted of non-distorted speech from two native Swedish speakers, and the target speech was either clear (CLR-condition) or distorted (NV-condition). Hence, the CLR speech was perceptually more similar to the non-distorted babble than the NV speech. When compared to the energetic maskers, the predicted difficulty associated with greater perceptual similarity between masker and target was evident. CLR speech was more challenging in 2BAB than in fluctuating SSN and static WN. Conversely, when the target speech was noise vocoded, the non-distorted 2BAB was shown to have a lesser masking effect than the other two maskers.

Interestingly, this confirmed effect of perceptual similarity held true only when comparing maskers. When the signal-quality conditions were compared, NV speech, despite being more perceptually distinct from the non-distorted chatter, proved more challenging in 2BAB than CLR speech in 2BAB (95% CI of mean difference in SNR = 1.5, 3.7). Furthermore, in both signal-quality manipulations for 2BAB, ADHD participants' performance was significantly inferior to controls'. This result suggests that simultaneously ignoring competing speech and attending to degraded speech is far more taxing on cognitive processes than suppressing the disturbance of competing speakers while listening to a target of high acoustic quality. NV speech is frequently used in the literature to simulate cochlear implant processing, which, due to technological and biological limitations, results in a signal with high spectrotemporal degradation. This finding thus highlights the disadvantage in processing load that cochlear implant users face in daily listening conditions (cf. Overstreet and Hoen, 2018). Furthermore, in line with the ELU model, it emphasizes the coping advantage that normal-hearing persons with high cognitive capacity have when processing degraded speech (e.g., from a loudspeaker or a poor phone connection) in acoustically crowded environments.

Fluctuating Speech-Shaped Noise

When listening to NV speech, fluctuating SSN had the greatest masking effect of all three maskers, and ADHD participants needed higher SNRs than controls for speech understanding to be effective. The fluctuating masker together with CLR speech did not result in a significant group difference in SNRs; however, further inspection confirmed that better task performance in this condition corresponded with proficiency in cognitive control. Stuart (2008) researched the effects of fluctuating maskers in children and adolescents in comparison with adults. General findings indicated that developing children were able to benefit from listening in the amplitude dips of the noise, but the process was thought to place greater reliance upon cognitive capacity. Thus, younger children (<14 years) tend to require higher SNRs than older adolescents/adults in fluctuating maskers, which coincides with their ongoing development of cognitive skills. Because the ADHD participants, in accord with their diagnosis, demonstrated poorer capacity on measures of attention and complex working memory, our finding, after controlling for the effects of age, indicates that the ADHD group may have been exerting more cognitive effort than controls to piece together in working memory the sparse glimpses of speech amidst the noise.

This finding aligns with the work of Michalek et al. (2014), who tested the predictions of the ELU model in adults with ADHD. The authors compared performance with controls on SIN tasks both with and without visual cues. The masker consisted of 5-talker babble. Studies have shown that when there are more than four background talkers, the temporal fine structure and envelope of the masker start to resemble fluctuating SSN (Rosen et al., 2013). In their auditory-only condition (i.e., without the aid of visual cues), Michalek et al. (2014) found that their ADHD participants performed as well as controls in the noisiest (SNR = 0) condition, and performance for the ADHD group correlated with measures of working memory capacity. The authors speculated, within the framework of the ELU model, that ADHD participants were exerting more cognitive effort in order to maintain a commensurate level of performance under the noisiest condition. Following through on this perspective, the significant difference between groups for the NV condition in our study suggests that the additional demands of the distortion left the ADHD group with insufficient capacity to solve the task at SNRs equivalent to controls'. Thus, how accurately SIN is understood is intricately related to the individual's available capacity to compensate for degraded auditory processing (cf. Rudner and Lunner, 2014).

White Noise

As discussed above, when it comes to masking effects, a masker is more challenging to speech perception the more similar it is to the acoustic qualities of the signal. The results confirmed this general pattern. Although the continuous WN was perceptually similar to the NV speech, its masking effects were not as severe as those of the fluctuating masker, whose spectrotemporal variance was even more similar to the NV signal. In addition, the expected pattern of masking effects was similar for both groups, albeit the ADHD group needed more favorable SNRs than controls across maskers for NV speech. In the CLR-speech conditions, WN was expected to have the least masking effect, which was also evident in our results. Unlike with the fluctuating masker, however, there was a significant group difference, with the control group coping much better at higher levels of WN than the ADHD group.

One outcome explored in this study was whether continuous auditory WN could benefit the ADHD group, as postulated by the MBA model (Sikström and Söderlund, 2007). Our results do not favor the MBA model, which predicted that ADHD participants would perform at least as well as controls in WN due to the mechanism of stochastic resonance (McDonnell and Ward, 2011). Stochastic resonance applies when the fidelity or amplitude of an output signal from a suboptimal nonlinear system is enhanced by stochastic stationary noise (e.g., WN or stationary SSN), which improves the system's representation of the input signal. In the case of auditory processing, two types of stochastic resonance have been observed (Zeng et al., 2000; Behnam and Zeng, 2003): (1) threshold stochastic resonance, wherein the signal is presented at levels below the detection threshold and the addition of noise amplifies the signal, enabling it to be detected by the auditory system; and (2) suprathreshold stochastic resonance, where the signal is presented above the detection threshold and the addition of noise (at some optimal level) improves the fidelity of the signal and enhances fine temporal signal discrimination. The mechanism of stochastic resonance, however, is not yet fully understood, and whether the phenomenon of noise benefit acts merely at the perceptual level or at a top-down level, through the integration of neural activity from many sources, remains to be determined. If the latter is the case, positive effects in persons with ADHD should not be observed in SIN tasks that place lesser demands on cognitive processing. Our materials, however, were shown to involve cognitive processing, in that proficiency on the HINT task improved as a function of cognitive capacity across participants. Furthermore, we specifically varied the cognitive demands of the task by manipulating signal quality, effectively enabling us to compare the effects of the maskers under two different levels of load. We saw, however, no indication of noise benefit in the ADHD group in either load condition. This leaves open the question of whether the mechanism of stochastic resonance can enhance signal processing at levels higher than the perceptual system in the context of auditory processing.

In a pilot study, Söderlund and Jobs (2016) tested the MBA model's predictions for SIN in a sample of schoolboys (9–10 years) with and without ADHD. The authors hypothesized that children with ADHD would demonstrate poorer SRTs in quiet than the control group, but that in stationary SSN the ADHD group would benefit from the noise (via suprathreshold stochastic resonance) and the variance in SRTs between groups would be neutralized. Their statistical results confirmed this hypothesized interaction of noise level (quiet vs. 65 dB WN) by group. Söderlund and Jobs (2016) concluded that in order for the beneficial effects of suprathreshold stochastic resonance to occur in the context of speech processing, the noise should be presented binaurally and at an intensity of 65–80 dB SPL. Our experiment presented all auditory materials binaurally and held the speech signal at a fixed 70 dB; the initial SNR was 0 dB and was adjusted adaptively according to participants' responses. Thus, our experimental conditions should have been sufficient to induce the mechanism of suprathreshold stochastic resonance, particularly in the NV condition, in which the processing demands upon the cognitive system were increased. However, we did not observe any interactions with the WN masker. The ADHD group's performance was significantly poorer than controls' both in quiet and in WN, and for both signal-quality conditions. Hence, we did not observe evidence to suggest a WN benefit in the ADHD group given our experimental manipulations.
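The adaptive tracking just described (speech fixed at 70 dB, SNR starting at 0 dB and moving with each response) can be sketched as a simple 1-up/1-down staircase. This is a hedged illustration only: the step size, stopping rule, and simulated listener below are assumptions, not the exact HINT protocol used in the study.

```python
import math
import random

SPEECH_DB = 70.0   # fixed speech level, as in the experiment
STEP_DB = 2.0      # illustrative step size (not the study's exact rule)

def run_staircase(p_correct, trials=30, snr=0.0):
    """1-up/1-down track; p_correct(snr) gives the chance of a correct repetition."""
    history = []
    for _ in range(trials):
        correct = random.random() < p_correct(snr)
        history.append(snr)
        # Correct response -> raise the noise (lower SNR); incorrect -> lower it.
        snr += -STEP_DB if correct else STEP_DB
    return history

random.seed(0)
# Simulated listener: intelligibility is a logistic function of SNR,
# crossing 50% at -4 dB (an arbitrary choice for the demo).
listener = lambda snr: 1.0 / (1.0 + math.exp(-(snr + 4.0)))

track = run_staircase(listener)
estimated_srt = sum(track[-10:]) / 10    # mean of final trials ~ 50% point
noise_db = SPEECH_DB - estimated_srt     # noise level implied by 70 dB speech
```

A 1-up/1-down rule converges on the 50%-correct point of the listener's psychometric function, which is why the averaged tail of the track serves as the SRT estimate.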

One possible reason for the discrepancy between studies is the difference in the speech materials used. HINT sentence materials differ from the Hagerman sentences (Hagerman and Hermansson, 2015) used by Söderlund and Jobs (2016) in that they mimic more natural language processing and provide greater contextual support, which facilitates prediction of final words. Hagerman sentences, on the other hand, are based on a predictable grammatical structure (numeral + adjective + noun; e.g., six new pencils), but the listener cannot derive/infer the content prior to its being heard. In a recent review of the ELU model, Rönnberg et al. (2019) elucidated that SIN is less demanding upon working memory maintenance when the sentence materials are high in contextual support and lexical predictability. This addition to the ELU model offers an alternative explanation for the results in Söderlund and Jobs (2016). The use of Hagerman sentence materials may have prevented the possibility of a top-down advantage; i.e., the lack of contextual support may have impeded the possibility for children with more efficient/developed cognitive control to modulate the automaticity of inferential processes (cf. Kiefer, 2007). Additionally, axonal maturation in the superficial layers of the auditory cortex does not reach a density equivalent to that of adults until around 11 years (Moore and Linthicum, 2007). Thus, the demands placed on working memory maintenance by Hagerman sentences, in combination with masking effects, may have impacted both auditory and cognitive processing in these children to such a degree that there was very little variance between groups in the presence of noise (i.e., a floor effect).

A second alternative is that the ADHD participants were able to perform on par with controls by exerting more effort to solve the task in working memory. In such a scenario, the ELU model predicts that individual differences in SRTs would correlate with individual differences in measures of complex working memory and cognitive load (Rönnberg et al., 2019). The pilot study (Söderlund and Jobs, 2016) does not report whether SRT variance across participants in noise was negatively correlated with the measures of working memory and attention that demonstrated significant group differences (ADHD < Controls). Nonetheless, the discrepancy in results between studies indicates that a far more detailed and controlled experimental design is necessary to provide conclusive evidence for beneficial suprathreshold stochastic resonance effects, as opposed to negative masking effects, for individuals with ADHD in the domain of SIN processing. Furthermore, the present group is more heterogeneous with regard to age than those of earlier studies on noise benefit (Söderlund et al., 2007, 2016; Helps et al., 2014), so developmental differences across children with ADHD must also be considered.

Effect of Cognitive Capacity

A principal hypothesis of the ELU model is that measures of cognitive capacity predict general SIN ability (Rönnberg et al., 2013). Multiple regression analysis was used to test this hypothesis, with capacity measures of cognitive control (i.e., combined proficiency in selective attention, working memory maintenance, and inhibition) and lexical access efficacy as predictors and participants' overall HINT performance as the outcome variable. In addition, measures of hearing acuity in quiet were included in the model to see whether individual differences at baseline could account for some of the variance in noise. All predictors had a significant association with HINT performance and together accounted for 69% of the variance. Importantly, although participants' baseline hearing acuity was a contributing predictor of outcomes, individual differences in cognitive control and lexical access efficacy proved to be far stronger determinants of SIN ability. In addition, the two cognitive regressors contributed with relatively equal importance to model predictions. These findings robustly support the ELU model, which underscores the crucial involvement of top-down mechanisms in understanding speech under adverse listening conditions.

Implications

The results of the current study have implications for our understanding of suitable classroom environments and the types of solutions schools can employ to reduce listening effort in students with deficient cognitive capacity. Indeed, we have provided supporting evidence for the preliminary work of Schafer et al. (2013), who investigated whether normal-hearing children with autism spectrum disorder and ADHD could benefit from personal FM (frequency modulation) systems in the classroom. Their research utilized FM systems consisting of a small signal receiver fitted in the child's ear and paired with a microphone worn by the teacher, designed to improve the SNR at the child's ear without impeding sound stimulation from the natural environment. Schafer et al. (2013) noted that fitting autism spectrum disorder and ADHD participants with FM systems improved both SIN ability and listening behaviors in the classroom. Given that the current study observed that persons with ADHD required higher SNRs in noise conditions typical of classroom environments (e.g., background chatter, ventilation/fan noise), the use of personal FM systems (or other devices that can improve listening conditions by enhancing SNRs) may circumvent various behavioral problems associated with increased listening effort, such as fatigue, distraction, and poor retention of information (cf. Peelle, 2018).

Additionally, the use of central stimulant medication has also been shown to improve both auditory processing and the subjective experience of listening effort and background noise disturbance in persons with ADHD (Keith and Engineer, 1991; Freyaldenhoven et al., 2005; Lanzetta-Valdo et al., 2016). Although the use of stimulant medication in children is controversial and the long-term health risks are still being investigated (cf. Curtin et al., 2018), our findings, together with previous reports, provide reasons to consider stimulant medication as a means to improve cognitive performance and facilitate learning in school-aged children with top-down processing deficits.

Another implication from our results offers potential improvements to diagnostic procedures in relation to ADHD. The consistently poorer performance observed in the ADHD group, along with mounting reports that persons with ADHD generally demonstrate inferior performance to controls on auditory processing tasks, suggests that SIN tests may prove to be beneficial clinical instruments when diagnosing ADHD. In particular, we have shown that the sound stimuli can manipulate cognitive load without the confounds of additional conceptual processing that is frequently involved in other popular neuropsychological measures of cognitive control. For instance, numerous working memory tasks require mathematical abilities or a developed concept of ordinals/seriality (e.g., the Paced Auditory Serial Addition Test, Operation-Span Task and Digit-Span Memory Task). By utilizing auditory conditions designed to tax attention and working memory capacity (i.e., noise and signal-quality manipulations) together with easily processed information (i.e., highly familiar speech materials), SIN tasks may aid diagnosticians in identifying deficits specific to top-down processing. Further research in this regard is therefore encouraged.

Limitations

This study has several limitations. First, data collection had limited control over the test environment. Voluntary participants were recruited regardless of where they resided in Sweden, which entailed that the test leader traveled to the participants and conducted testing in locations that were readily available to them. Even though care was taken to ensure that the immediate environment was sufficiently quiet and isolated so as not to interfere with the stimuli presented through the headphones during testing, a controlled environment, such as a soundproof lab, would have been preferable. Second, the research was limited by a small sample size. Although we aimed for a larger sample, the actual response rate, particularly for the ADHD group, was much lower than anticipated given the time constraints of the project. A much larger sample would have allowed us to utilize more sophisticated analysis techniques (e.g., mixed modeling) and to pose more explorative questions about how the cognitive variables interact with individual participants' performance in speech understanding. Third, only a small number of studies have researched the effects of background noise upon speech understanding in ADHD, and they vary considerably in the types of test protocols employed (including speech and noise materials) and in sample characteristics (i.e., age, gender, and sample size). This heterogeneity across studies renders comparisons of findings difficult. For these reasons, we refrained from postulating a priori hypotheses regarding the effects of specific noise types in our ADHD group and instead chose to explore several possible outcomes (O1–O3) based upon previous reports. Having observed consistently poorer SIN performance in our sample of ADHD participants, and noting that performance improved as a function of cognitive capacity across individuals, we argue for the necessity of replication studies in order to refine our understanding of the extent to which deficits in attention and working memory impact speech processing.

To conclude, the large body of research that forms the ELU model emphasizes the important contribution of top-down mechanisms when listening to speech in adverse conditions. To test this assumption more thoroughly, the current study investigated whether processing SIN is more difficult for normal-hearing adolescents diagnosed with ADHD than for their age-matched counterparts. Our results showed that ADHD participants had greater difficulty than controls in listening to clear and degraded speech, both in noise and in quiet. In addition, individual differences in cognitive capacity greatly determined participants' proficiency in understanding SIN. These findings provide additional support for the ELU model and further highlight the general disadvantage that persons with deficient cognitive capacity have when attending to speech under challenging conditions.

Data Availability

Ethics Statement

This study was carried out in accordance with the recommendations of the Regional Ethics Committee in Linköping, Sweden. All participating minors were required to make an informed decision about participation of their own accord, and their parents/guardians gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Regional Ethics Committee in Linköping, Sweden (Dnr 2016/169-31).

Author Contributions

JR, HD, and MR contributed to the conception and design of the study. RB was responsible for data collection, statistical analysis, drafting, and finalization of the manuscript. JR, HD, MR, and GS contributed to manuscript revision and approved the submitted version.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to extend a special thank you to Dr. Per Gustafsson and Dr. Andrea Johansson-Capusan for their clinical and ethical advice, as well as Dr. Ann Fristedt, Dr. Verena Åhlin, and Psykiatri Partners in Östergötland for their assistance in the recruitment of ADHD participants. We would like to thank Mathias Hällgren and Joakim Blomgren at the Technical Audiology Department for their assistance with the sound materials.

Funding

The study was funded by the Swedish Research Council (2015-01917). Linköping University library funded the open access publication fee.


