Adult Diagnostic (AD)
Yunfang Zheng, Sc.D., MD
Associate Professor
Central Michigan University
Mount Pleasant, Michigan, United States
Jianwei Guan
Michigan, United States
Joan Besing, PhD
Professor and Program Director
Montclair State University
Bloomfield, New Jersey, United States
The ability to hear and understand speech is crucial to everyday functioning, so hearing loss (HL) at any level often has a significant impact, especially in noisy and reverberant environments (Picou, Gordon, & Ricketts, 2016), and aging further degrades speech perception (Helfer & Wilber, 1990). Studies have found that hearing aids (HAs) improve speech understanding in noise (Miller & Watson, 2007) and reduce listening effort (Ahlstrom, Horwitz, & Dubno, 2013), but speech perception declines with increasing reverberation time even under aided conditions (Nabelek & Pickett, 1974). However, the current literature does not systematically examine combinations of noise and reverberation across different degrees of HL, or how HA use affects speech perception in such environments. This study investigated how elders with different degrees of HL understand speech in everyday listening environments while using HAs.
Elders (60+ years) with normal hearing (NH) and with mild, moderate, moderately severe, and severe sensorineural hearing loss (SNHL) were recruited. Participants with HL were fit binaurally with Phonak Audeo 90M-RT devices, and the fittings were verified with a real-ear-measurement system. All participants listened to phonetically balanced monosyllabic words, randomly selected without replacement from a subset of Egan's (1948) word list and presented via circumaural earphones at the most comfortable level from a simulated 0° azimuth. Each participant completed the test in quiet and at 12, 8, 4, 0, and -4 dB SNR in anechoic and reverberant (RT60 = 0.2, 0.4, 0.6, and 0.9 s) environments. Speech-spectrum noise was also presented from the front, with its level varied to achieve the different SNRs. Conditions were tested in random order, each at least twice to check consistency, with 20 words per condition. Phoneme percent correct was recorded for each response and combined across words in each condition to yield the phoneme recognition score (PRS); the number of correctly repeated words in each condition was converted to percent correct for the word recognition score (WRS).
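The PRS/WRS scoring described above can be sketched as follows (a minimal illustration only; the function name and per-word data layout are hypothetical and not taken from the study's actual analysis code):

```python
def score_condition(responses):
    """Compute PRS and WRS (in percent) for one test condition.

    responses: list of tuples (phonemes_correct, phonemes_total, word_correct),
    one tuple per word presented in the condition (20 words in this study).
    """
    phonemes_correct = sum(r[0] for r in responses)
    phonemes_total = sum(r[1] for r in responses)
    # PRS: phonemes correct pooled across all words in the condition
    prs = 100.0 * phonemes_correct / phonemes_total
    # WRS: proportion of whole words repeated correctly
    wrs = 100.0 * sum(1 for r in responses if r[2]) / len(responses)
    return prs, wrs
```

For example, two CVC words scored as (3 of 3 phonemes, word correct) and (2 of 3 phonemes, word incorrect) would give a PRS of about 83.3% and a WRS of 50%.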
Data analysis included multivariate analysis with repeated measures and post-hoc Tukey Honestly Significant Difference tests. Results revealed that WRS and PRS decreased as SNR decreased and as reverberation time increased for all groups. The significant change in both scores occurred at 4-12 dB SNR / RT60 0.4 s, 4 dB / 0.2-0.4 s, 0-8 dB / 0.4 s, 4-8 dB / 0.4-0.6 s, and 4-8 dB / 0.4-0.6 s for the NH through severe groups, respectively. HAs had a significant effect on both word and phoneme recognition (p < .0001). The NH and aided-mild-HL groups had significantly higher recognition scores than the other aided groups (p < .0001); there was no significant difference in WRS or PRS between the NH and aided-mild groups, nor among the other aided groups. Compared with unaided results (Zheng et al., AAA 2020), aided recognition scores were significantly higher, and HL effects were diminished with HAs, further confirming hearing aid benefit in noise and reverberation. There was a significant interaction between noise and reverberation (p < .0001) for both measures: in the noisier and more reverberant combined conditions, WRS and PRS were poorer than in either noise or reverberation alone.
This study provides useful information about binaurally aided speech recognition for listeners with different degrees of HL, which can inform counseling and help set realistic expectations, supporting patients in achieving a higher quality of life and greater satisfaction with audiological services.