Pubmed for Handhelds
Title: New developments in speech pattern element hearing aids for the profoundly deaf.
Author: Faulkner A, Walliker JR, Howard IS, Ball V, Fourcin AJ.
Journal: Scand Audiol Suppl; 1993; 38:124-35.
PubMed ID: 8153558.
Abstract: Two new developments in speech pattern processing hearing aids will be described. The first development is the use of compound speech pattern coding. Speech information which is invisible to the lipreader was encoded in terms of three acoustic speech factors: the voice fundamental frequency pattern, coded as a sinusoid; the presence of aperiodic excitation, coded as a low-frequency noise; and the wide-band amplitude envelope, coded by amplitude modulation of the sinusoid and noise signals. Each element of the compound stimulus was individually matched in frequency and intensity to the listener's receptive range. Audio-visual speech reception was assessed in five profoundly hearing-impaired listeners to examine the contributions of adding voiceless and amplitude information to the voice fundamental frequency pattern, and to compare these codings with amplified speech. In both consonant recognition and connected discourse tracking (CDT), all five subjects showed an advantage from the addition of amplitude information to the fundamental frequency pattern. In consonant identification, all five subjects showed further improvements in performance when voiceless speech excitation was additionally encoded together with amplitude information, but this effect was not found in CDT. The addition of voiceless information to voice fundamental frequency information did not improve performance in the absence of amplitude information. Three of the subjects performed significantly better in at least one of the compound speech pattern conditions than with amplified speech, while the other two performed similarly with amplified speech and the best compound speech pattern condition.
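The compound coding described above can be sketched in a few lines of signal processing. The sketch below is illustrative only: the frame rate, sample rate, noise band, and the moving-average low-pass filter are assumptions, not parameters from the original aid, which matched each element to the individual listener's receptive range.

```python
import numpy as np

def encode_compound(f0_hz, voiced, aperiodic, env, sr=16000, frame=160):
    """Sketch of three-element compound speech pattern coding:
    voice fundamental frequency as a sinusoid, aperiodic (voiceless)
    excitation as low-frequency noise, and the wide-band amplitude
    envelope as modulation of both. All numeric settings are
    illustrative assumptions."""
    n = len(f0_hz) * frame
    out = np.zeros(n)
    phase = 0.0
    rng = np.random.default_rng(0)
    for i in range(len(f0_hz)):
        sl = slice(i * frame, (i + 1) * frame)
        if voiced[i]:
            # sinusoid at the extracted fundamental frequency
            t = np.arange(frame) / sr
            out[sl] = np.sin(phase + 2 * np.pi * f0_hz[i] * t)
            phase += 2 * np.pi * f0_hz[i] * frame / sr
        elif aperiodic[i]:
            # low-frequency noise marks voiceless excitation;
            # crude moving-average low-pass (illustrative)
            noise = rng.standard_normal(frame)
            out[sl] = np.convolve(noise, np.ones(8) / 8, mode="same")
        # the amplitude envelope modulates whichever element is present
        out[sl] *= env[i]
    return out
```

Frames with neither voicing nor aperiodic excitation stay silent, and a zero envelope silences any frame, mirroring how each pattern element carries only its own cue.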
The three speech pattern elements encoded here may represent a near-optimal basis for an acoustic aid to lipreading for this group of listeners. The second development is the use of a trained multi-layer perceptron (MLP) pattern classification algorithm as the basis for a robust real-time voice fundamental frequency extractor. This algorithm runs on a low-power digital signal processor which can be incorporated in a wearable hearing aid. Aided lipreading for speech in noise was assessed in the same five profoundly hearing-impaired listeners to compare the benefits of conventional hearing aids with those of an aid which provided MLP-based fundamental frequency information together with speech+noise amplitude information. The MLP-based pattern element aid gave significantly better reception of consonantal voicing contrasts from speech in pink noise than conventional amplification and, consequently, also better overall performance in audio-visual consonant identification. (ABSTRACT TRUNCATED AT 400 WORDS)
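The MLP fundamental frequency extractor can be sketched as a single forward pass over per-frame features. The feature representation, layer sizes, and output coding below (one unit per candidate F0 class plus an implicit voiceless class) are assumptions for illustration; the original ran with trained weights on a low-power DSP.

```python
import numpy as np

def mlp_forward(frame_features, W1, b1, W2, b2):
    """One forward pass of a small multi-layer perceptron of the kind
    described for real-time voice fundamental frequency extraction.
    Returns a probability over F0/voicing classes for one analysis
    frame; architecture details are illustrative assumptions."""
    h = np.tanh(frame_features @ W1 + b1)   # hidden layer
    logits = h @ W2 + b2                    # output layer
    e = np.exp(logits - logits.max())       # stable softmax
    return e / e.sum()                      # class probabilities

# Usage with untrained (random) weights, shapes only for illustration:
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((20, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 10)), np.zeros(10)
probs = mlp_forward(rng.standard_normal(20), W1, b1, W2, b2)
f0_class = int(np.argmax(probs))
```

Classifying F0 frame by frame, rather than tracking it with a rule-based pitch detector, is what makes the extractor robust enough for speech in noise, at the cost of requiring training data.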