

PUBMED FOR HANDHELDS



  • Title: Audibility-index predictions of normal-hearing and hearing-impaired listeners' performance on the connected speech test.
    Author: Sherbecoe RL, Studebaker GA.
    Journal: Ear Hear; 2003 Feb; 24(1):71-88. PubMed ID: 12598814.
    Abstract:
    OBJECTIVE: In a previous study (Sherbecoe & Studebaker, 2002), we derived a frequency-importance function and a transfer function for the audio compact disc version of the Connected Speech Test (CST). The current investigation evaluated the validity of these audibility-index (AI) functions based on how well they predicted data from four published studies that presented the CST to normal-hearing and hearing-impaired subjects.
    DESIGN: AI values were calculated for the test conditions received by 78 normal-hearing and 72 hearing-impaired subjects from the selected studies. The observed CST scores and AI values for these conditions/subjects were then plotted, and the dispersion of the data was compared to the expected range based on critical differences. The AI values for the conditions/subjects were also converted into expected CST scores and subtracted from the corresponding observed scores to determine the distribution of the resulting difference scores and the relationship between the difference scores and subject age.
    RESULTS: Good predictions were obtained for normal-hearing subjects who had been tested under audio-only conditions but not for those who had received audiovisual tests. The expected scores for the latter subjects were too low when the AI accounted only for audibility and too high when it included the correction for visual cues from ANSI S3.5-1997. All of the hearing-impaired subjects had been tested under audio-only conditions. In their case, the mean difference between the observed and the expected scores was comparable with the audio-only mean for the normal-hearing subjects when the AI included corrections for speech level distortion and hearing loss desensitization. However, the hearing-impaired subject data had greater variability. The predictions for these subjects also decreased in accuracy as subject age increased beyond 70 yr, despite the application of an AI correction for age.
CONCLUSIONS: The results of this study suggest that the AI functions derived for the CST satisfactorily predict the scores of normal-hearing subjects when they listen in speech babble under audio-only conditions, but not when they receive visual cues. To obtain accurate predictions for the audiovisual form of the CST, it will be necessary to develop new ANSI-style AI correction equations for visual cues or new AI functions based on audiovisual test scores. If the current AI functions are used to predict the scores of hearing-impaired listeners tested under audio-only conditions, the AI should include corrections for the effects of speech level and hearing loss. A correction for subject age could also be applied, if appropriate. In either case, however, the predictions are still likely to be less accurate than the predictions for normal-hearing subjects. This may be because speech recognition deficits in people with hearing loss are not due solely to diminished audibility. Hearing-impaired subjects, particularly if they are elderly, also may be more susceptible to masking effects or other factors not accounted for by the AI.
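    The prediction procedure described in the DESIGN section (converting AI values into expected scores via a transfer function, then subtracting the expected from the observed scores) can be sketched as follows. The logistic transfer-function form and its parameters below are hypothetical placeholders for illustration only, not the fitted CST functions published in Sherbecoe & Studebaker (2002):

    ```python
    # Illustrative sketch of the AI-based prediction procedure.
    # NOTE: the logistic shape, slope, and midpoint are assumed placeholders,
    # not the actual CST transfer function from Sherbecoe & Studebaker (2002).
    import math

    def expected_cst_score(ai, slope=10.0, midpoint=0.35):
        """Map an audibility-index value (0-1) to a hypothetical expected
        CST score (0-100 %) via a logistic transfer function."""
        return 100.0 / (1.0 + math.exp(-slope * (ai - midpoint)))

    def difference_scores(observed, ai_values):
        """Observed-minus-expected score for each condition/subject,
        as used in the study to assess prediction accuracy."""
        return [obs - expected_cst_score(ai)
                for obs, ai in zip(observed, ai_values)]

    # Example: three hypothetical conditions/subjects.
    observed = [62.0, 88.0, 95.0]      # observed CST scores (%)
    ai_values = [0.30, 0.55, 0.80]     # computed audibility-index values
    diffs = difference_scores(observed, ai_values)
    ```

    A mean difference score near zero with a narrow spread would indicate good predictions; the study found larger spreads for hearing-impaired subjects even after applying corrections.
    
    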