


135 related articles for article (PubMed ID: 23935410) — entries 21–40

  • 21. Emotions in [a]: a perceptual and acoustic study.
    Toivanen J; Waaramaa T; Alku P; Laukkanen AM; Seppänen T; Väyrynen E; Airas M
    Logoped Phoniatr Vocol; 2006; 31(1):43-8. PubMed ID: 16517522

  • 22. Emotion in speech: the acoustic attributes of fear, anger, sadness, and joy.
    Sobin C; Alpert M
    J Psycholinguist Res; 1999 Jul; 28(4):347-65. PubMed ID: 10380660

  • 23. A comprehensive study on bilingual and multilingual speech emotion recognition using a two-pass classification scheme.
    Heracleous P; Yoneyama A
    PLoS One; 2019; 14(8):e0220386. PubMed ID: 31415592

  • 24. Speech Emotion Recognition Using Attention Model.
    Singh J; Saheer LB; Faust O
    Int J Environ Res Public Health; 2023 Mar; 20(6):. PubMed ID: 36982048

  • 25. How aging affects the recognition of emotional speech.
    Paulmann S; Pell MD; Kotz SA
    Brain Lang; 2008 Mar; 104(3):262-9. PubMed ID: 17428529

  • 26. Encoding emotions in speech with the size code. A perceptual investigation.
    Chuenwattanapranithi S; Xu Y; Thipakorn B; Maneewongvatana S
    Phonetica; 2008; 65(4):210-30. PubMed ID: 19221452

  • 27. The expression and recognition of emotions in the voice across five nations: A lens model analysis based on acoustic features.
    Laukka P; Elfenbein HA; Thingujam NS; Rockstuhl T; Iraki FK; Chui W; Althoff J
    J Pers Soc Psychol; 2016 Nov; 111(5):686-705. PubMed ID: 27537275

  • 28. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD).
    Keshtiari N; Kuhlmann M; Eslami M; Klann-Delius G
    Behav Res Methods; 2015 Mar; 47(1):275-94. PubMed ID: 24853832

  • 29. Estimation of phoneme-specific HMM topologies for the automatic recognition of dysarthric speech.
    Caballero-Morales SO
    Comput Math Methods Med; 2013; 2013():297860. PubMed ID: 24222784

  • 30. Crossmodal and incremental perception of audiovisual cues to emotional speech.
    Barkhuysen P; Krahmer E; Swerts M
    Lang Speech; 2010; 53(Pt 1):3-30. PubMed ID: 20415000

  • 31. Research on Chinese Speech Emotion Recognition Based on Deep Neural Network and Acoustic Features.
    Lee MC; Yeh SC; Chang JW; Chen ZY
    Sensors (Basel); 2022 Jun; 22(13):. PubMed ID: 35808238

  • 32. A Deep Learning Method Using Gender-Specific Features for Emotion Recognition.
    Zhang LM; Li Y; Zhang YT; Ng GW; Leau YB; Yan H
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772395

  • 33. Emotions in freely varying and mono-pitched vowels, acoustic and EGG analyses.
    Waaramaa T; Palo P; Kankare E
    Logoped Phoniatr Vocol; 2015 Dec; 40(4):156-70. PubMed ID: 24998780

  • 34. Multi-Input Speech Emotion Recognition Model Using Mel Spectrogram and GeMAPS.
    Toyoshima I; Okada Y; Ishimaru M; Uchiyama R; Tada M
    Sensors (Basel); 2023 Feb; 23(3):. PubMed ID: 36772782

  • 35. Consonant and vowel articulation accuracy in younger and middle-aged Spanish healthy adults.
    Moreno-Torres I; Nava E
    PLoS One; 2020; 15(11):e0242018. PubMed ID: 33166341

  • 36. Speaking to the trained ear: musical expertise enhances the recognition of emotions in speech prosody.
    Lima CF; Castro SL
    Emotion; 2011 Oct; 11(5):1021-31. PubMed ID: 21942696

  • 37. How Do We Recognize Emotion From Movement? Specific Motor Components Contribute to the Recognition of Each Emotion.
    Melzer A; Shafir T; Tsachor RP
    Front Psychol; 2019; 10():1389. PubMed ID: 31333524

  • 38. The Emotion Probe: On the Universality of Cross-Linguistic and Cross-Gender Speech Emotion Recognition via Machine Learning.
    Costantini G; Parada-Cabaleiro E; Casali D; Cesarini V
    Sensors (Basel); 2022 Mar; 22(7):. PubMed ID: 35408076

  • 39. Detection of Emotion of Speech for RAVDESS Audio Using Hybrid Convolution Neural Network.
    Puri T; Soni M; Dhiman G; Ibrahim Khalaf O; Alazzam M; Raza Khan I
    J Healthc Eng; 2022; 2022():8472947. PubMed ID: 35265307

  • 40. Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users.
    Barrett KC; Chatterjee M; Caldwell MT; Deroche MLD; Jiradejvong P; Kulkarni AM; Limb CJ
    Ear Hear; 2020; 41(5):1372-1382. PubMed ID: 32149924
