

PUBMED FOR HANDHELDS



  • Title: Behavioral Response Modeling to Resolve Listener- and Stimulus-Related Influences on Audiovisual Speech Integration in Cochlear Implant Users.
    Author: Salagovic CA, Stevenson RA, Butler BE.
    Journal: Ear Hear; 2024 Dec 11. PubMed ID: 39660814.
    Abstract:
    OBJECTIVES: Speech intelligibility is supported by the sound of a talker's voice and by visual cues related to articulatory movements. The relative contribution of auditory and visual cues to an integrated audiovisual percept varies depending on a listener's environment and sensory acuity. Cochlear implant users rely more on visual cues than listeners with acoustic hearing, compensating for the fact that the auditory signal produced by an implant is poorly resolved relative to that of the typically developed cochlea. The relative weight placed on auditory and visual speech cues can be measured by presenting discordant cues across the two modalities and assessing the resulting percept (the McGurk effect). The current literature is mixed with regard to how cochlear implant users respond to McGurk stimuli: some studies suggest they report hearing syllables that represent a fusion of the auditory and visual cues more frequently than typical-hearing controls, while others report less frequent fusion. However, several of these studies compared implant users to younger control samples despite evidence that the likelihood and strength of audiovisual integration increase with age. The present study therefore sought to clarify the effects of hearing status and age on multisensory speech integration using a combination of behavioral analyses and response modeling.
    DESIGN: Cochlear implant users (mean age = 58.9 years), age-matched controls (mean age = 61.5 years), and younger controls (mean age = 25.9 years) completed an online audiovisual speech task. Participants saw and/or heard four different talkers producing syllables in auditory-alone, visual-alone, and incongruent audiovisual conditions. After each trial, participants reported the syllable they heard or saw from a list of four possible options.
    RESULTS: The younger and older control groups performed similarly in both unisensory conditions. Cochlear implant users performed significantly better than either control group in the visual-alone condition. On incongruent audiovisual trials, cochlear implant users and age-matched controls experienced significantly more fusion than younger controls. When fusion was not experienced, younger controls were more likely to report the auditorily presented syllable than either implant users or age-matched controls; conversely, implant users were more likely to report the visually presented syllable than either age-matched or younger controls. Modeling of the relationship between stimuli and behavioral responses revealed that younger controls had lower disparity thresholds (i.e., were less likely to experience a fused audiovisual percept) than either implant users or older controls, while implant users had higher levels of sensory noise (i.e., more variability in how a given stimulus pair is perceived across multiple presentations) than age-matched controls.
    CONCLUSIONS: Our findings suggest that age and cochlear implantation may have independent effects on McGurk effect perception. Noisy encoding of disparity modeling confirms that age is a strong predictor of an individual's prior likelihood of experiencing audiovisual integration, but suggests that hearing status modulates this relationship through differences in sensory noise during speech encoding. Together, these findings demonstrate that different groups of listeners can arrive at similar levels of performance in different ways, and they highlight the need for careful consideration of stimulus- and group-related effects on multisensory speech perception.
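    Model sketch: as a rough illustration of the noisy encoding of disparity approach referenced in the conclusions, the Python snippet below treats fusion as occurring whenever a noisily encoded stimulus disparity falls below a listener's disparity threshold. The parameter names and values are hypothetical illustrations, not taken from the paper, and the paper's actual fitting procedure is not reproduced here.

        # Sketch of a noisy-encoding-of-disparity-style fusion model.
        # Assumes Gaussian sensory noise; all numbers are made up.
        from statistics import NormalDist

        def p_fusion(disparity: float, threshold: float, sigma: float) -> float:
            """P(listener fuses an incongruent audiovisual syllable pair).

            The physical disparity between auditory and visual cues is
            encoded with Gaussian noise (sd = sigma) on each trial;
            fusion is reported when the encoded disparity falls below
            the listener's disparity threshold.
            """
            return NormalDist(mu=disparity, sigma=sigma).cdf(threshold)

        d = 1.0  # stimulus disparity, arbitrary units
        # Hypothetical younger control: low threshold -> fusion is rare (~0.05).
        print(p_fusion(d, threshold=0.5, sigma=0.3))
        # Hypothetical implant user: higher threshold and more sensory noise ->
        # fusion is common (~0.73), but percepts vary more across repetitions.
        print(p_fusion(d, threshold=1.5, sigma=0.8))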