Research (R)
Trisha Karnik
Undergraduate Research Assistant
Baylor University
Waco, Texas, United States
Sydney Dukes
Baylor University, United States
Carrie Drew, AuD
Clinical Faculty
Baylor University
Waco, Texas, United States
George Whitaker, AuD
Baylor Scott & White Medical Center
Temple, Texas, United States
Yang-Soo Yoon, PhD
Assistant Professor
Baylor University
Waco, Texas, United States
Existing bimodal and electric acoustic stimulation (EAS) studies report highly mixed results regarding frequency fitting maps. In this study, we evaluated the effect of various frequency maps on the benefit of bimodal and EAS hearing for sentence perception using acoustic simulation. Results indicated that the optimal map was similar across the bimodal and EAS hearing configurations but was influenced by the upper frequency bound of residual hearing in the acoustic ear. Results also showed that bimodal and EAS benefits in sentence perception were similar regardless of signal-to-noise ratio (SNR).
Rationale/Purpose
Speech understanding with a cochlear implant (CI) and a hearing aid (HA) in opposite ears (i.e., bimodal hearing) or in the same ear (i.e., EAS) produces a considerable synergistic effect. While many bimodal and EAS users experience significant benefit, others gain little or none, or even experience interference. One potential contributor to this variability is that the effect of different degrees of residual hearing in the HA ear on the frequency maps has not been seriously considered. Another challenge in interpreting previous bimodal and EAS studies is the variation in testing protocols, audiologic characteristics, demographic variables, bandwidths, filter cutoff frequencies, and filter slopes for the HA and CI ears. Each of these factors precludes controlled comparisons in real bimodal and EAS patients. In this study, using simulations of bimodal and EAS hearing with normal-hearing (NH) listeners, we determined the optimal frequency maps, based on sentence perception scores, by adjusting the acoustic and electric boundary frequencies.
Method
Twelve adult NH listeners were recruited for sentence perception testing in noise under both bimodal and EAS hearing. Three degrees of simulated residual acoustic hearing were created using band-pass filters (A50-A250, A50-A500, or A50-A750 Hz). For electric simulation, an 8-channel sinewave vocoder was used with a fixed output frequency range (1000-7938 Hz) and four different input frequency ranges, defined relative to the upper cutoff frequency of the acoustic simulation, to create typical bimodal and EAS frequency maps: full overlap, narrow overlap, meet, and gap. The full overlap map used a fixed input frequency range (188-7938 Hz). The narrow overlap map set the lower cutoff of the electric input 10% below the upper cutoff of the acoustic stimulation (e.g., 225-7938 Hz for A50-A250); the meet map set it equal to the upper cutoff of the acoustic stimulation (e.g., 250-7938 Hz for A50-A250); and the gap map set it 50% above the upper cutoff of the acoustic stimulation (e.g., 375-7938 Hz for A50-A250). Sentence perception was measured in noise with acoustic-alone, electric-alone, and combined acoustic and electric stimulation as a function of frequency map and SNR for each simulated acoustic hearing loss. For bimodal hearing, acoustic and electric stimulation were delivered to opposite ears, whereas for EAS hearing, both were presented to the same ear.
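The simulation pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, filter orders, logarithmic channel spacing, and envelope cutoff are assumptions the abstract does not specify, and the final summation is a simplification (in the bimodal condition the two signals go to opposite ears rather than being mixed).

```python
import numpy as np
from scipy import signal

FS = 16000  # sampling rate in Hz (assumed; not specified in the abstract)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = signal.butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, x)

def acoustic_sim(x, upper, fs=FS):
    """Simulated residual acoustic hearing: 50 Hz to `upper` (250/500/750 Hz)."""
    return bandpass(x, 50.0, upper, fs)

def sinewave_vocoder(x, in_lo, in_hi, out_lo=1000.0, out_hi=7938.0,
                     n_ch=8, fs=FS, env_cut=160.0):
    """8-channel sinewave vocoder: band envelopes from the input range
    modulate sinusoids centered in fixed output bands (1000-7938 Hz)."""
    in_edges = np.geomspace(in_lo, in_hi, n_ch + 1)    # assumed log spacing
    out_edges = np.geomspace(out_lo, out_hi, n_ch + 1)
    env_sos = signal.butter(2, env_cut, fs=fs, output="sos")
    t = np.arange(len(x)) / fs
    y = np.zeros(len(x))
    for k in range(n_ch):
        band = bandpass(x, in_edges[k], in_edges[k + 1], fs)
        env = signal.sosfiltfilt(env_sos, np.abs(band))  # smoothed envelope
        env = np.clip(env, 0.0, None)
        fc = np.sqrt(out_edges[k] * out_edges[k + 1])    # geometric band center
        y += env * np.sin(2 * np.pi * fc * t)
    return y

# Example: "gap" map for the A50-A250 condition -- the electric input's
# lower cutoff sits 50% above the acoustic upper cutoff (250 Hz -> 375 Hz).
rng = np.random.default_rng(0)
x = rng.standard_normal(FS)               # 1 s of noise as a stand-in for speech
acoustic = acoustic_sim(x, upper=250.0)   # simulated residual hearing
electric = sinewave_vocoder(x, in_lo=375.0, in_hi=7938.0)
combined = acoustic + electric            # simplified A+E combination
```

For the other map conditions, only `in_lo` changes: 188 Hz (full overlap), 225 Hz (narrow overlap, 10% below the acoustic cutoff), or 250 Hz (meet).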
Results and Conclusions