  • Title: Recruitment of fusiform face area associated with listening to degraded speech sounds in auditory-visual speech perception: a PET study.
    Authors: Kawase T, Yamaguchi K, Ogawa T, Suzuki K, Suzuki M, Itoh M, Kobayashi T, Fujii T.
    Journal: Neurosci Lett; 2005 Jul 15; 382(3):254-8. PubMed ID: 15925100.
    Abstract:
    For fast and accurate recognition of external information, the human brain appears to integrate information from multiple sensory modalities. We used positron emission tomography (PET) to identify the brain areas involved in auditory-visual speech perception. We measured the regional cerebral blood flow (rCBF) of young, normal volunteers during the presentation of dynamic facial movements during vocalization and during a visual control condition (visual noise), each under two auditory conditions: normal and degraded speech sounds. The subjects were instructed to listen carefully to the presented speech sounds while keeping their eyes open and to report what they heard. The PET data showed that the elevation of rCBF in the right fusiform gyrus (known as the "face area") was not significant when the subjects listened to normal speech sounds accompanied by a dynamic image of the speaker's face, but was significant when degraded speech sounds (filtered with a 500 Hz low-pass filter) were presented with the facial image. These results confirm the possible involvement of the fusiform face area (FFA) in auditory-visual speech perception, especially when auditory information is degraded, and suggest that visual information is recruited interactively to compensate for insufficient auditory information.
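    A minimal sketch of how such degraded-speech stimuli might be produced, in Python with SciPy. The abstract specifies only the 500 Hz low-pass cutoff; the Butterworth design, filter order, function name, and file names below are illustrative assumptions, not details taken from the paper.

        # Low-pass filter a mono WAV file to simulate the degraded-speech
        # condition (500 Hz cutoff). Filter type and order are assumptions;
        # the abstract states only that a 500 Hz low-pass filter was used.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, filtfilt

        def degrade_speech(in_path, out_path, cutoff_hz=500.0):
            rate, samples = wavfile.read(in_path)        # sampling rate in Hz
            samples = samples.astype(np.float64)
            # Assumed 4th-order Butterworth low-pass; Wn is the cutoff
            # normalized to the Nyquist frequency (rate / 2).
            b, a = butter(N=4, Wn=cutoff_hz / (rate / 2.0), btype="low")
            filtered = filtfilt(b, a, samples)           # zero-phase filtering
            wavfile.write(out_path, rate, filtered.astype(np.int16))

        # Hypothetical usage:
        # degrade_speech("speech_normal.wav", "speech_degraded_500hz.wav")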