PubMed for Handhelds
Title: Emotions in [a]: a perceptual and acoustic study.
Authors: Toivanen J, Waaramaa T, Alku P, Laukkanen AM, Seppänen T, Väyrynen E, Airas M.
Journal: Logoped Phoniatr Vocol. 2006;31(1):43-8.
PubMed ID: 16517522
Abstract: The aim of this investigation is to study how well voice quality conveys emotional content that can be discriminated by human listeners and by computer. The speech data were produced by nine professional actors (four women, five men). The speakers simulated the following basic emotions in a unit consisting of a vowel extracted from running Finnish speech: neutral, sadness, joy, anger, and tenderness. The automatic discrimination was clearly more successful than human emotion recognition. Human listeners thus apparently need speech samples longer than vowel-length units for reliable emotion discrimination, whereas the machine utilizes quantitative parameters effectively even for short speech samples.
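For illustration only, here is a minimal sketch of how quantitative acoustic parameters might be extracted from a vowel-length sample and fed to a classifier, in the spirit of the automatic discrimination the abstract describes. The feature set (F0 and energy statistics), the librosa/scikit-learn tooling, and the SVM classifier are assumptions made for this sketch; the record does not state which parameters or classifier the study actually used.

# Sketch (not the authors' method): classify emotion in a short vowel
# sample from simple quantitative parameters.
import numpy as np
import librosa
from sklearn.svm import SVC

def vowel_features(y, sr):
    """Extract assumed quantitative parameters from a short vowel sample."""
    # Frame-wise fundamental frequency; NaN marks unvoiced frames.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]      # frame-wise energy
    return np.array([
        f0.mean() if f0.size else 0.0,     # mean F0
        f0.std() if f0.size else 0.0,      # F0 variability
        rms.mean(),                        # mean intensity
        rms.std(),                         # intensity variability
    ])

# The five categories simulated by the actors in the study.
EMOTIONS = ["neutral", "sadness", "joy", "anger", "tenderness"]

def train_classifier(samples, labels):
    """samples: list of (waveform, sample_rate) pairs; labels: emotion names."""
    X = np.stack([vowel_features(y, sr) for y, sr in samples])
    clf = SVC(kernel="rbf")                # classifier choice is an assumption
    clf.fit(X, labels)
    return clf

Because every feature here is computed per frame and then aggregated, the representation stays meaningful even for a single extracted vowel, which is consistent with the abstract's point that a machine can exploit such parameters on samples too short for reliable human judgment.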