These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

120 related articles for article (PubMed ID: 39338607)

  • 1. AVaTER: Fusing Audio, Visual, and Textual Modalities Using Cross-Modal Attention for Emotion Recognition.
    Das A; Sarma MS; Hoque MM; Siddique N; Dewan MAA
    Sensors (Basel); 2024 Sep; 24(18):. PubMed ID: 39338607

  • 2. BanglaSER: A speech emotion recognition dataset for the Bangla language.
    Das RK; Islam N; Ahmed MR; Islam S; Shatabda S; Islam AKMM
    Data Brief; 2022 Jun; 42():108091. PubMed ID: 35392615

  • 3. Drivers' Comprehensive Emotion Recognition Based on HAM.
    Zhou D; Cheng Y; Wen L; Luo H; Liu Y
    Sensors (Basel); 2023 Oct; 23(19):. PubMed ID: 37837124

  • 4. KBES: A dataset for realistic Bangla speech emotion recognition with intensity level.
    Billah MM; Sarker ML; Akhand MAH
    Data Brief; 2023 Dec; 51():109741. PubMed ID: 37965597

  • 5. Joint low-rank tensor fusion and cross-modal attention for multimodal physiological signals based emotion recognition.
    Wan X; Wang Y; Wang Z; Tang Y; Liu B
    Physiol Meas; 2024 Jul; 45(7):. PubMed ID: 38917842

  • 6. Cross-modal credibility modelling for EEG-based multimodal emotion recognition.
    Zhang Y; Liu H; Wang D; Zhang D; Lou T; Zheng Q; Quek C
    J Neural Eng; 2024 Apr; 21(2):. PubMed ID: 38565099

  • 7. A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions.
    Razzaq MA; Hussain J; Bang J; Hua CH; Satti FA; Rehman UU; Bilal HSM; Kim ST; Lee S
    Sensors (Basel); 2023 Apr; 23(9):. PubMed ID: 37177574

  • 8. Multimodal interaction enhanced representation learning for video emotion recognition.
    Xia X; Zhao Y; Jiang D
    Front Neurosci; 2022; 16():1086380. PubMed ID: 36601594

  • 9. Multimodal Sensing for Depression Risk Detection: Integrating Audio, Video, and Text Data.
    Zhang Z; Zhang S; Ni D; Wei Z; Yang K; Jin S; Huang G; Liang Z; Zhang L; Li L; Ding H; Zhang Z; Wang J
    Sensors (Basel); 2024 Jun; 24(12):. PubMed ID: 38931497

  • 10. EAV: EEG-Audio-Video Dataset for Emotion Recognition in Conversational Contexts.
    Lee MH; Shomanov A; Begim B; Kabidenova Z; Nyssanbay A; Yazici A; Lee SW
    Sci Data; 2024 Sep; 11(1):1026. PubMed ID: 39300129

  • 11. Multimodal Emotion Recognition Based on Cascaded Multichannel and Hierarchical Fusion.
    Liu X; Xu Z; Huang K
    Comput Intell Neurosci; 2023; 2023():9645611. PubMed ID: 36643891

  • 12. AttendAffectNet: Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention.
    Thao HTP; Balamurali BT; Roig G; Herremans D
    Sensors (Basel); 2021 Dec; 21(24):. PubMed ID: 34960450

  • 13. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset.
    Cao H; Cooper DG; Keutmann MK; Gur RC; Nenkova A; Verma R
    IEEE Trans Affect Comput; 2014; 5(4):377-390. PubMed ID: 25653738

  • 14. Research on cross-modal emotion recognition based on multi-layer semantic fusion.
    Xu Z; Gao Y
    Math Biosci Eng; 2024 Jan; 21(2):2488-2514. PubMed ID: 38454693

  • 15. Robust Multimodal Emotion Recognition from Conversation with Transformer-Based Crossmodality Fusion.
    Xie B; Sidulova M; Park CH
    Sensors (Basel); 2021 Jul; 21(14):. PubMed ID: 34300651

  • 16. AIA-Net: Adaptive Interactive Attention Network for Text-Audio Emotion Recognition.
    Zhang T; Li S; Chen B; Yuan H; Philip Chen CL
    IEEE Trans Cybern; 2023 Dec; 53(12):7659-7671. PubMed ID: 35994535

  • 17. Multi-Modal Residual Perceptron Network for Audio-Video Emotion Recognition.
    Chang X; Skarbek W
    Sensors (Basel); 2021 Aug; 21(16):. PubMed ID: 34450894

  • 18. Elder emotion classification through multimodal fusion of intermediate layers and cross-modal transfer learning.
    Sreevidya P; Veni S; Ramana Murthy OV
    Signal Image Video Process; 2022; 16(5):1281-1288. PubMed ID: 35069919

  • 19. Pedagogical sentiment analysis based on the BERT-CNN-BiGRU-attention model in the context of intercultural communication barriers.
    Bi X; Zhang T
    PeerJ Comput Sci; 2024; 10():e2166. PubMed ID: 38983236

  • 20. Talking Face Generation With Audio-Deduced Emotional Landmarks.
    Zhai S; Liu M; Li Y; Gao Z; Zhu L; Nie L
    IEEE Trans Neural Netw Learn Syst; 2024 Oct; 35(10):14099-14111. PubMed ID: 37216233
