

PUBMED FOR HANDHELDS



  • Title: Music training is associated with better audio-visual integration in Chinese language.
    Author: Ju P, Zhou Z, Xie Y, Hui J, Yang X.
    Journal: Int J Psychophysiol; 2024 Sep; 203():112414. PubMed ID: 39134177.
    Abstract:
    In the present study, we aimed to investigate whether long-term music training could improve audio-visual speech integration in Chinese, using event-related brain potential (ERP) measurements. Specifically, we recruited musicians and non-musicians to participate in an experiment in which visual Chinese characters were presented simultaneously with congruent or incongruent speech sounds. In order to maintain participants' focus on both the auditory and visual modalities, they were instructed to perform a probe detection task. Our study revealed that, for the musicians, audiovisual incongruent stimuli elicited larger N1 and N400 amplitudes compared to audiovisual congruent stimuli. In contrast, for the non-musicians, only a larger N400 amplitude was observed for incongruent stimuli relative to congruent stimuli, with no significant difference in N1 amplitude. Furthermore, correlation analyses indicated that more years of music training were associated with a larger N1 effect among the musicians. These results suggest that musicians were capable of detecting character-speech sound incongruence in an earlier time window than non-musicians. Overall, our findings provide compelling evidence that music training is associated with better integration of visual characters and auditory speech sounds in language processing.