

PUBMED FOR HANDHELDS



  • Title: Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation.
    Author: Zhang T, Dorman MF, Spahr AJ.
    Journal: Ear Hear; 2010 Feb; 31(1):63-9. PubMed ID: 20050394.
    Abstract:
    OBJECTIVES: The aim of this study was to determine the minimum amount of low-frequency acoustic information required to achieve a speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear.
    DESIGN: Recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass filtered at 125, 250, 500, or 750 Hz, or left unfiltered (wideband).
    RESULTS: Adding low-frequency acoustic information to electric stimulation led to a significant improvement in word recognition in quiet and in sentence recognition in noise. Improvement in the combined electric and acoustic stimulation condition was observed even when the acoustic information was limited to the 125-Hz low-passed signal. Sentence recognition in noise improved further when the acoustic signal was widened to the unfiltered (wideband) condition.
    CONCLUSIONS: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information improves the representation of voicing, which in turn reduces the number of word candidates in the lexicon. In noise, the robust representation of voicing provides access to low-frequency acoustic landmarks that mark syllable structure and word boundaries; these landmarks can bootstrap word and sentence recognition.
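    The filtering conditions in the DESIGN section can be illustrated with a short script. The sketch below low-pass filters a speech waveform at each cutoff frequency reported in the abstract (125, 250, 500, and 750 Hz) and keeps an unfiltered wideband copy. It is a minimal illustration only: the Butterworth filter type, the filter order, and the input file name (sentence.wav) are assumptions, since the abstract does not describe the signal processing actually used in the study.

    # Minimal sketch of the low-pass conditions described in the DESIGN section.
    # The Butterworth filter type, the filter order, and the input file name are
    # assumptions; the abstract does not specify the actual processing chain.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    CUTOFFS_HZ = [125, 250, 500, 750]  # low-pass cutoffs reported in the study

    def lowpass(signal, fs, cutoff_hz, order=6):
        """Zero-phase Butterworth low-pass filter (assumed implementation)."""
        sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
        return sosfiltfilt(sos, signal)

    if __name__ == "__main__":
        fs, speech = wavfile.read("sentence.wav")  # hypothetical mono recording
        speech = speech.astype(np.float64)
        # One stimulus per filtering condition, plus the unfiltered (wideband) case.
        stimuli = {"wideband": speech}
        for fc in CUTOFFS_HZ:
            stimuli[f"lp{fc}"] = lowpass(speech, fs, fc)

    The sketch uses a zero-phase (forward-backward) filter so that the filtering step itself does not time-shift the low-frequency acoustic landmarks discussed in the conclusions.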