These tools are no longer maintained as of December 31, 2024; an archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service with questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

176 related articles for article (PubMed ID: 37950115)

  • 1. Vienna Talking Faces (ViTaFa): A multimodal person database with synchronized videos, images, and voices.
    Krumpholz C; Quigley C; Fusani L; Leder H
    Behav Res Methods; 2024 Apr; 56(4):2923-2940. PubMed ID: 37950115

  • 2. The role of emotion in dynamic audiovisual integration of faces and voices.
    Kokinous J; Kotz SA; Tavano A; Schröger E
    Soc Cogn Affect Neurosci; 2015 May; 10(5):713-20. PubMed ID: 25147273

  • 3. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities.
    von Eiff CI; Kauk J; Schweinberger SR
    Behav Res Methods; 2023 Oct. PubMed ID: 37821750

  • 4. Event-Related Potentials Reveal Evidence for Late Integration of Emotional Prosody and Facial Expression in Dynamic Stimuli: An ERP Study.
    Föcker J; Röder B
    Multisens Res; 2019 Jan; 32(6):473-497. PubMed ID: 31085752

  • 5. The temporal dynamics of processing emotions from vocal, facial, and bodily expressions.
    Jessen S; Kotz SA
    Neuroimage; 2011 Sep; 58(2):665-74. PubMed ID: 21718792

  • 6. Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study.
    Palama A; Malsert J; Gentaz E
    PLoS One; 2018; 13(4):e0194579. PubMed ID: 29641530

  • 7. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency.
    Kokinous J; Tavano A; Kotz SA; Schröger E
    Biol Psychol; 2017 Feb; 123():155-165. PubMed ID: 27979653

  • 8. Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration.
    Watson R; Latinus M; Noguchi T; Garrod O; Crabbe F; Belin P
    J Neurosci; 2014 May; 34(20):6813-21. PubMed ID: 24828635

  • 9. Anxiety biases audiovisual processing of social signals.
    Heffer N; Karl A; Jicol C; Ashwin C; Petrini K
    Behav Brain Res; 2021 Jul; 410():113346. PubMed ID: 33964354

  • 10. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.
    Kreifelts B; Ethofer T; Grodd W; Erb M; Wildgruber D
    Neuroimage; 2007 Oct; 37(4):1445-56. PubMed ID: 17659885

  • 11. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English.
    Livingstone SR; Russo FA
    PLoS One; 2018; 13(5):e0196391. PubMed ID: 29768426

  • 12. Adaptation aftereffects in vocal emotion perception elicited by expressive faces and voices.
    Skuk VG; Schweinberger SR
    PLoS One; 2013; 8(11):e81691. PubMed ID: 24236215

  • 13. The Geneva Faces and Voices (GEFAV) database.
    Ferdenzi C; Delplanque S; Mehu-Blantar I; Da Paz Cabral KM; Domingos Felicio M; Sander D
    Behav Res Methods; 2015 Dec; 47(4):1110-1121. PubMed ID: 25511208

  • 14. The many faces of a face: Comparing stills and videos of facial expressions in eight dimensions (SAVE database).
    Garrido MV; Lopes D; Prada M; Rodrigues D; Jerónimo R; Mourão RP
    Behav Res Methods; 2017 Aug; 49(4):1343-1360. PubMed ID: 27573005

  • 15. Emotion identification across adulthood using the Dynamic FACES database of emotional expressions in younger, middle aged, and older adults.
    Holland CAC; Ebner NC; Lin T; Samanez-Larkin GR
    Cogn Emot; 2019 Mar; 33(2):245-257. PubMed ID: 29595363

  • 16. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions.
    Şentürk YD; Tavacioglu EE; Duymaz İ; Sayim B; Alp N
    Behav Res Methods; 2023 Sep; 55(6):3078-3099. PubMed ID: 36018484

  • 17. The N400 and late occipital positivity in processing dynamic facial expressions with natural emotional voice.
    Mori K; Tanaka A; Kawabata H; Arao H
    Neuroreport; 2021 Jul; 32(10):858-863. PubMed ID: 34029292

  • 18. Recalibration of vocal affect by a dynamic face.
    Baart M; Vroomen J
    Exp Brain Res; 2018 Jul; 236(7):1911-1918. PubMed ID: 29696314

  • 19. The Complex Emotion Expression Database: A validated stimulus set of trained actors.
    Benda MS; Scherf KS
    PLoS One; 2020; 15(2):e0228248. PubMed ID: 32012179

  • 20. High trait anxiety enhances optimal integration of auditory and visual threat cues.
    Heffer N; Gradidge M; Karl A; Ashwin C; Petrini K
    J Behav Ther Exp Psychiatry; 2022 Mar; 74():101693. PubMed ID: 34563795
