Neuro-Audiology (NA)
Stacy Garrard
Student
University of Louisville
Louisville, Kentucky, United States
Shae D. Morgan, AuD, PhD
Assistant Professor
University of Louisville
Louisville, Kentucky, United States
Objectives: Pupillometry, which measures changes in pupil dilation, can assess the cognitive effort expended during speech perception. Among other things, pupil dilation can be used to assess the cognitive load (often termed listening effort) required by both the auditory system and autonomic nervous system (Winn et al., 2018) during a speech processing task. Higher speech intelligibility (percent word or sentence recognition) is correlated with increased pupil dilation (Miles et al., 2017), especially when listening in noise. However, not all aspects of speech perception and processing are captured by a percent correct function. For example, in addition to correctly hearing the words a talker says, listeners simultaneously make social inferences from the target speech (e.g., the talker’s intention, emotional state, age, gender, etc.; Winn et al., 2018). This study compares the amount of cognitive effort induced when perceiving what was said (sentence recognition) versus how the person was talking (emotion recognition). We hypothesized that pupil dilation would increase for emotional stimuli (due to stimulus arousal) and when the listener was instructed to attend to the emotional state of the talker, due to a combination of stimulus arousal and the effort involved in making a social, emotional judgment of the talker. Further, we predicted that reporting both the words in the sentence and the emotional state of the talker would result in the largest pupil dilations, as a dual-task cost of simultaneous speech processing along two domains should require more effort than single-task processing.
Design: Young adult listeners (18-35 years old) with normal hearing and no history of neurological or emotional disorders completed two emotional questionnaires: the Positive and Negative Affect Schedule (PANAS) and the Emo-CHEQ. Participants listened to sentences taken from the MESS database (Morgan, 2019) and either repeated the sentence in its entirety, stated the perceived emotion of the talker from a list of three emotion options (Angry, Happy, or Sad) and one non-emotional control option (Neutral), or did both. Each task (emotion recognition vs. sentence recognition vs. both) was presented in quiet.
Results: The data suggest increased listening effort for emotional stimuli compared to neutral stimuli across all task types. Pupil dilation also increased when participants were instructed to report the perceived emotional state of the talker (i.e., the emotion recognition task and the simultaneous emotion and sentence recognition task).
Conclusions: Results demonstrate that the presence of emotion in auditory stimuli elicited greater listening effort than non-emotional stimuli. Further, asking an individual to judge the emotional state of a talker was more cognitively demanding than asking them to report the words said. Lastly, simultaneous sentence and emotion recognition (more representative of everyday interaction) was the most demanding condition tested. Actual cognitive demands during speech perception tasks may therefore be underestimated when using emotionally neutral stimuli, and when using word recognition tasks alone that do not account for the simultaneous processing of social information alongside word recognition.