306 related articles for the article with PubMed ID 29768426.
1. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. Livingstone SR, Russo FA. PLoS One. 2018;13(5):e0196391. PubMed ID: 29768426.
2. Detection of Emotion of Speech for RAVDESS Audio Using Hybrid Convolution Neural Network. Puri T, Soni M, Dhiman G, Ibrahim Khalaf O, Alazzam M, Raza Khan I. J Healthc Eng. 2022;2022:8472947. PubMed ID: 35265307.
3. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities. von Eiff CI, Kauk J, Schweinberger SR. Behav Res Methods. 2024 Aug;56(5):5103-5115. PubMed ID: 37821750.
4. Human-Computer Interaction for Recognizing Speech Emotions Using Multilayer Perceptron Classifier. Alnuaim AA, Zakariah M, Shukla PK, Alhadlaq A, Hatamleh WA, Tarazi H, Sureshbabu R, Ratna R. J Healthc Eng. 2022;2022:6005446. PubMed ID: 35388315.
6. Vienna Talking Faces (ViTaFa): A multimodal person database with synchronized videos, images, and voices. Krumpholz C, Quigley C, Fusani L, Leder H. Behav Res Methods. 2024 Apr;56(4):2923-2940. PubMed ID: 37950115.
7. Common cues to emotion in the dynamic facial expressions of speech and song. Livingstone SR, Thompson WF, Wanderley MM, Palmer C. Q J Exp Psychol (Hove). 2015;68(5):952-970. PubMed ID: 25424388.
8. Infant discrimination of naturalistic emotional expressions: the role of face and voice. Caron AJ, Caron RF, MacLean DJ. Child Dev. 1988 Jun;59(3):604-616. PubMed ID: 3383670.
9. Differences of people with visual disabilities in the perceived intensity of emotion inferred from speech of sighted people in online communication settings. Kim HN, Taylor S. Disabil Rehabil Assist Technol. 2024 Apr;19(3):633-640. PubMed ID: 35997772.
10. The Dysarthric Expressed Emotional Database (DEED): An audio-visual database in British English. Alhinti L, Cunningham S, Christensen H. PLoS One. 2023;18(8):e0287971. PubMed ID: 37549162.
11. Selective eye fixations on diagnostic face regions of dynamic emotional expressions: KDEF-dyn database. Calvo MG, Fernández-Martín A, Gutiérrez-García A, Lundqvist D. Sci Rep. 2018 Nov;8(1):17039. PubMed ID: 30451919.
12. Recognizing vocal emotions in Mandarin Chinese: a validated database of Chinese vocal emotional stimuli. Liu P, Pell MD. Behav Res Methods. 2012 Dec;44(4):1042-1051. PubMed ID: 22539230.
13. Evidence for shared deficits in identifying emotions from faces and from voices in autism spectrum disorders and specific language impairment. Taylor LJ, Maybery MT, Grayndler L, Whitehouse AJ. Int J Lang Commun Disord. 2015 Jul;50(4):452-466. PubMed ID: 25588870.
14. SUST Bangla Emotional Speech Corpus (SUBESCO): An audio-only emotional speech corpus for Bangla. Sultana S, Rahman MS, Selim MR, Iqbal MZ. PLoS One. 2021;16(4):e0250173. PubMed ID: 33930026.
15. The Hoosier Vocal Emotions Corpus: A validated set of North American English pseudo-words for evaluating emotion processing. Darcy I, Fontaine NMG. Behav Res Methods. 2020 Apr;52(2):901-917. PubMed ID: 31485866.
16. Can you hear what I feel? A validated prosodic set of angry, happy, and neutral Italian pseudowords. Preti E, Suttora C, Richetin J. Behav Res Methods. 2016 Mar;48(1):259-271. PubMed ID: 25701108.
17. The role of the age and gender, and the complexity of the syntactic unit in the perception of affective emotions in voice. Trinite B, Zdanovica A, Kurme D, Lavrane E, Magazeina I, Jansone A. Codas. 2024;36(5):e20240009. PubMed ID: 39046026.
18. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD). Keshtiari N, Kuhlmann M, Eslami M, Klann-Delius G. Behav Res Methods. 2015 Mar;47(1):275-294. PubMed ID: 24853832.
19. Effects of dynamic information in recognising facial expressions on dimensional and categorical judgments. Fujimura T, Suzuki N. Perception. 2010;39(4):543-552. PubMed ID: 20515001.
20. A Cantonese Audio-Visual Emotional Speech (CAVES) dataset. Chong CS, Davis C, Kim J. Behav Res Methods. 2024 Aug;56(5):5264-5278. PubMed ID: 38017201.