PubMed for Handhelds
Title: Development of speechreading supplements based on automatic speech recognition.
Authors: Duchnowski P, Lum DS, Krause JC, Sexton MG, Bratakos MS, Braida LD.
Journal: IEEE Trans Biomed Eng. 2000 Apr;47(4):487-96. PubMed ID: 10763294.

Abstract: In manual-cued speech (MCS), a speaker produces hand gestures to resolve ambiguities among speech elements that are often confused by speechreaders. The shape of the hand distinguishes among consonants; the position of the hand relative to the face distinguishes among vowels. Experienced receivers of MCS achieve nearly perfect reception of everyday connected speech. MCS has been taught to very young deaf children and greatly facilitates language learning, communication, and general education. This paper describes a system that produces a form of cued speech automatically in real time and reports on its evaluation by trained receivers of MCS. Cues are derived by a hidden Markov model (HMM)-based, speaker-dependent phonetic speech recognizer that uses context-dependent phone models, and they are presented visually by superimposing animated handshapes on the face of the talker. The benefit provided by these cues depends strongly on the articulation of the hand movements and on precise synchronization of the actions of the hands and the face. Using the system reported here, experienced cue receivers recognize roughly two-thirds of the keywords in cued low-context sentences correctly, compared with roughly one-third by speechreading alone (SA). The practical significance of these improvements is that they support fairly normal rates of reception of conversational speech, a task that is often difficult via SA.
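The pipeline the abstract describes (phonetic recognition, then conversion of recognized phones into handshape/position cues for display) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a recognized phone stream is already available, and the phone-to-cue tables are abbreviated, partly hypothetical placeholders. American Cued Speech does use eight handshapes for consonants and four hand positions for vowels, but the exact groupings shown here are not taken from the paper.

# Illustrative sketch, not the authors' system: convert a recognized
# phone stream into cued-speech cues (consonant handshape + vowel
# position). Phone groupings below are abbreviated placeholders;
# a real cue generator covers all English phones.
from typing import Iterator, NamedTuple

class Cue(NamedTuple):
    handshape: int   # 1-8: which consonant handshape to render
    position: str    # hand position relative to the face (vowel)

# Hypothetical partial tables (ARPAbet-style phone labels).
CONSONANT_HANDSHAPE = {"d": 1, "p": 1, "k": 2, "v": 2, "s": 3, "r": 3}
VOWEL_POSITION = {"iy": "mouth", "ae": "mouth", "aa": "side", "uw": "chin"}

def phones_to_cues(phones: list[str]) -> Iterator[Cue]:
    # Pair each consonant with the following vowel to form a CV cue.
    # A consonant with no following vowel is cued at the neutral side
    # position; a vowel with no preceding consonant uses handshape 5,
    # the conventional "no consonant" shape in American Cued Speech.
    i = 0
    while i < len(phones):
        p = phones[i]
        if p in CONSONANT_HANDSHAPE:
            nxt = phones[i + 1] if i + 1 < len(phones) else None
            if nxt in VOWEL_POSITION:
                yield Cue(CONSONANT_HANDSHAPE[p], VOWEL_POSITION[nxt])
                i += 2
                continue
            yield Cue(CONSONANT_HANDSHAPE[p], "side")
        elif p in VOWEL_POSITION:
            yield Cue(5, VOWEL_POSITION[p])
        i += 1

print(list(phones_to_cues(["k", "ae", "p"])))
# -> [Cue(handshape=2, position='mouth'), Cue(handshape=1, position='side')]

The sketch deliberately omits timing: in the actual system, the rendered handshapes had to be synchronized precisely with the talker's face, which the abstract identifies as critical to the benefit the cues provide.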