Title: Factors Affecting Bimodal Benefit in Pediatric Mandarin-Speaking Chinese Cochlear Implant Users.
Author: Liu YW, Tao DD, Chen B, Cheng X, Shu Y, Galvin JJ, Fu QJ.
Journal: Ear Hear; 2019; 40(6):1316-1327.
PubMed ID: 30882534.

Abstract:
OBJECTIVES: While fundamental frequency (F0) cues are important to both lexical tone perception and multitalker segregation, F0 cues are poorly perceived by cochlear implant (CI) users. Adding low-frequency acoustic hearing via a hearing aid in the contralateral ear may improve CI users' F0 perception. For English-speaking CI users, contralateral acoustic hearing has been shown to improve perception of target speech in noise and in competing talkers. For tonal languages such as Mandarin Chinese, F0 information is lexically meaningful. Given competing F0 information from multiple talkers and lexical tones, contralateral acoustic hearing may be especially beneficial for Mandarin-speaking CI users' perception of competing speech.

DESIGN: Bimodal benefit (CI+hearing aid - CI-only) was evaluated in 11 pediatric Mandarin-speaking Chinese CI users. In experiment 1, speech recognition thresholds (SRTs) were adaptively measured using a modified coordinated response measure test; subjects were required to correctly identify 2 keywords from among 10 choices in each category. SRTs were measured with CI-only or bimodal listening in the presence of steady state noise (SSN) or competing speech with the same (M+M) or different voice gender (M+F). Unaided thresholds in the non-CI ear and demographic factors were compared with speech performance. In experiment 2, SRTs were adaptively measured in SSN for recognition of 5 keywords, a more difficult listening task than the 2-keyword recognition task in experiment 1.

RESULTS: In experiment 1, SRTs were significantly lower for SSN than for competing speech in both the CI-only and bimodal listening conditions. There was no significant difference between CI-only and bimodal listening for SSN and M+F (p > 0.05); SRTs were significantly lower for CI-only than for bimodal listening for M+M (p < 0.05), suggesting bimodal interference. Subjects were able to make use of voice gender differences for bimodal listening (p < 0.05) but not for CI-only listening (p > 0.05). Unaided thresholds in the non-CI ear were positively correlated with bimodal SRTs for M+M (p < 0.006) but not for SSN or M+F. No significant correlations were observed between any demographic variables and SRTs (p > 0.05 in all cases). In experiment 2, SRTs were significantly lower with two than with five keywords (p < 0.05). A significant bimodal benefit was observed only for the 5-keyword condition (p < 0.05).

CONCLUSIONS: With the CI alone, subjects experienced greater interference with competing speech than with SSN and were unable to use voice gender differences to segregate talkers. For the coordinated response measure task, subjects experienced no bimodal benefit, and even bimodal interference, when competing talkers were the same voice gender. A bimodal benefit in SSN was observed for the five-keyword condition but not for the two-keyword condition, suggesting that bimodal listening may be more beneficial as the difficulty of the listening task increases. The present data suggest that bimodal benefit may depend on the type of masker and/or the difficulty of the listening task.
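Illustrative note: the abstract describes SRTs that were "adaptively measured" and a bimodal benefit defined as the difference between bimodal and CI-only performance, but it does not specify the adaptive rule. The sketch below is a minimal, hypothetical Python illustration of how such a measurement could work, assuming a generic one-up/one-down staircase tracking roughly 50% correct, a simulated logistic listener, fixed step size, and the sign convention that a lower SRT is better; none of these details are taken from the study itself.

import math
import random

def simulate_trial(snr_db, true_srt_db=-2.0, slope=0.5):
    """Hypothetical listener: probability of a correct keyword response
    rises with SNR according to a logistic function (assumption, not data
    from the paper)."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_srt_db)))
    return random.random() < p_correct

def adaptive_srt(initial_snr_db=10.0, step_db=2.0, n_reversals=8):
    """Generic one-up/one-down staircase: SNR drops after a correct
    response and rises after an incorrect one; the SRT estimate is the
    mean SNR at the last few reversals (tracks ~50% correct)."""
    snr, direction, reversals = initial_snr_db, None, []
    while len(reversals) < n_reversals:
        correct = simulate_trial(snr)
        new_direction = -1 if correct else +1
        if direction is not None and new_direction != direction:
            reversals.append(snr)          # track direction changes
        direction = new_direction
        snr += new_direction * step_db
    return sum(reversals[-4:]) / 4         # average of last four reversals

if __name__ == "__main__":
    srt_ci_only = adaptive_srt()
    srt_bimodal = adaptive_srt()
    # Lower SRT = better performance, so a positive difference here
    # corresponds to a bimodal benefit (assumed sign convention).
    print(f"Bimodal benefit: {srt_ci_only - srt_bimodal:.1f} dB")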