Amplification and Assistive Devices (AAD)
Linda Thibodeau
Professor
The University of Texas at Dallas
Dallas, Texas, United States
Rebekah Havens, BS
Student
The University of Texas at Dallas
Richardson, Texas, United States
Due to the COVID-19 pandemic, many students with hearing loss were required to learn virtually in their homes, while still receiving their public school accommodations, including remote microphone technology. The purpose of this project was to evaluate three virtual learning arrangements to determine acoustic characteristics of the transmitted signals. Measurements were made with three different web-based conferencing systems (Teams, Zoom, and Google Meet) and three listening arrangements (speakers, wireless connection, and remote microphone). The most variability in the acoustic signal occurred when speech was received through speakers of a laptop.
Summary:
Children with hearing loss face educational challenges in the classroom, as well as at home, because they do not have consistent access to sound across the speech frequencies, which are critical to expressive and receptive language development (Hoff & Naigles, 2002). Numerous factors can compromise access to auditory input, and compromised input can affect speech and language development (Ching & Dillon, 2013). The purpose of this project was to evaluate the devices used by children with hearing loss to determine the acoustic impact of virtual connections with assistive technology. A Phonak Audeo M90 RT hearing aid was programmed for a 60 dB HL flat hearing loss using the DSL v5 prescriptive formula for a ten-year-old child.
A control condition was compared against three virtual learning arrangements, each run three times. KEMAR and a loudspeaker were set up to represent a teacher instructing in a typical classroom. KEMAR was placed at zero degrees azimuth, three feet from a loudspeaker in the sound booth, which presented a 65 dBA speech signal. The speech signal was a female talker saying “he found fresh flowers in the city,” recorded through Adobe Audition software. The hearing aid worn by KEMAR was connected to a Dell Latitude laptop running Adobe Audition.
For the virtual listening arrangements, a two-room setup was used. In the sound booth, a loudspeaker was placed at zero degrees azimuth, three feet from a Lenovo personal computer running the web-conferencing application, representing the teacher's computer. In a separate room, a MacBook Pro served as the student's computer running the web-based conferencing systems, and KEMAR was fit with the MFA (Made For All phones) hearing aid. The signals received by the hearing aid on KEMAR were analyzed via a connection to a Dell Latitude laptop running Adobe Audition.
For condition one, the signal was received by the hearing aid worn by KEMAR via the speakers of the computer. For condition two, the signal was received via a Bluetooth connection between the hearing aid and the computer. For the final condition, the signal was received via a digital modulation connection between the hearing aid and a Phonak Roger Touchscreen, which was hard-wired to the computer. All three conditions were run with three web-based conferencing applications: Teams, Google Meet, and Zoom.
The results were analyzed by calculating the difference in acoustic output across the frequency range between the control condition and each of the listening arrangements. The least change in output occurred in condition three, when remote microphone technology was used. These results suggest a need for audiological management in virtual learning environments to ensure assistive technology can be used to maximize the acoustic signal for instruction.
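The analysis described above (per-frequency differences between the control recording and each condition) could be sketched as follows. This is an illustrative example only, not the authors' actual procedure: the band edges, sampling rate, and function names are assumptions, and the study's measurements were made with Adobe Audition rather than a script.

```python
import numpy as np

# Hypothetical octave bands spanning the speech frequencies (~250 Hz - 5.7 kHz)
BANDS = [(177, 354), (354, 707), (707, 1414), (1414, 2828), (2828, 5657)]

def band_levels_db(signal, fs, bands=BANDS):
    """Return the level (dB, arbitrary reference) of `signal` in each band,
    computed from the FFT power spectrum."""
    power_spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    levels = []
    for lo, hi in bands:
        in_band = (freqs >= lo) & (freqs < hi)
        band_power = power_spectrum[in_band].sum()
        levels.append(10.0 * np.log10(band_power + 1e-12))  # epsilon avoids log(0)
    return np.array(levels)

def output_difference(control, condition, fs=44100):
    """Per-band dB difference between a listening condition and the control;
    values near zero mean the condition preserved the control spectrum."""
    return band_levels_db(condition, fs) - band_levels_db(control, fs)
```

As a sanity check, a condition recording that is simply an attenuated copy of the control (half the amplitude) should show about -6 dB in every band, while a faithful transmission would show differences near zero.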
Ching, T. Y., & Dillon, H. (2013). Major findings of the LOCHI study on children at 3 years of age and implications for audiological management. International Journal of Audiology, 52(Suppl. 2), S65–S68.
Hoff, E., & Naigles, L. (2002). How children use input to acquire a lexicon. Child Development, 73(2), 418–433.