

BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

269 related articles for article (PubMed ID: 36601594)

  • 21. A multimodal dynamical variational autoencoder for audiovisual speech representation learning.
    Sadok S; Leglaive S; Girin L; Alameda-Pineda X; Séguier R
    Neural Netw; 2024 Apr; 172():106120. PubMed ID: 38266474

  • 22. Robust Multimodal Emotion Recognition from Conversation with Transformer-Based Crossmodality Fusion.
    Xie B; Sidulova M; Park CH
    Sensors (Basel); 2021 Jul; 21(14):. PubMed ID: 34300651

  • 23. Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective.
    Tao R; Zhu M; Cao H; Ren H
    Sensors (Basel); 2024 May; 24(10):. PubMed ID: 38793984

  • 24. A novel transformer autoencoder for multi-modal emotion recognition with incomplete data.
    Cheng C; Liu W; Fan Z; Feng L; Jia Z
    Neural Netw; 2024 Apr; 172():106111. PubMed ID: 38237444

  • 25. A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions.
    Razzaq MA; Hussain J; Bang J; Hua CH; Satti FA; Rehman UU; Bilal HSM; Kim ST; Lee S
    Sensors (Basel); 2023 Apr; 23(9):. PubMed ID: 37177574

  • 26. Towards an intelligent framework for multimodal affective data analysis.
    Poria S; Cambria E; Hussain A; Huang GB
    Neural Netw; 2015 Mar; 63():104-16. PubMed ID: 25523041

  • 27. Multi-Modal Adaptive Fusion Transformer Network for the Estimation of Depression Level.
    Sun H; Liu J; Chai S; Qiu Z; Lin L; Huang X; Chen Y
    Sensors (Basel); 2021 Jul; 21(14):. PubMed ID: 34300504

  • 28. AudioVisual Video Summarization.
    Zhao B; Gong M; Li X
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):5181-5188. PubMed ID: 34695009

  • 29. CDGT: Constructing diverse graph transformers for emotion recognition from facial videos.
    Chen D; Wen G; Li H; Yang P; Chen C; Wang B
    Neural Netw; 2024 Nov; 179():106573. PubMed ID: 39096753

  • 30. Emotion Recognition from Large-Scale Video Clips with Cross-Attention and Hybrid Feature Weighting Neural Networks.
    Zhou S; Wu X; Jiang F; Huang Q; Huang C
    Int J Environ Res Public Health; 2023 Jan; 20(2):. PubMed ID: 36674161

  • 31. Integrating audio and visual modalities for multimodal personality trait recognition.
    Zhao X; Liao Y; Tang Z; Xu Y; Tao X; Wang D; Wang G; Lu H
    Front Neurosci; 2022; 16():1107284. PubMed ID: 36685221

  • 32. Deep Multimodal Learning for Emotion Recognition in Spoken Language.
    Gu Y; Chen S; Marsic I
    Proc IEEE Int Conf Acoust Speech Signal Process; 2018 Apr; 2018():5079-5083. PubMed ID: 30505240

  • 33. Display-Semantic Transformer for Scene Text Recognition.
    Yang X; Silamu W; Xu M; Li Y
    Sensors (Basel); 2023 Sep; 23(19):. PubMed ID: 37836989

  • 34. Relation-Aggregated Cross-Graph Correlation Learning for Fine-Grained Image-Text Retrieval.
    Peng SJ; He Y; Liu X; Cheung YM; Xu X; Cui Z
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2194-2207. PubMed ID: 35830398

  • 35. Multi-Modal Representation via Contrastive Learning with Attention Bottleneck Fusion and Attentive Statistics Features.
    Guo Q; Liao Y; Li Z; Liang S
    Entropy (Basel); 2023 Oct; 25(10):. PubMed ID: 37895542

  • 36. Joint low-rank tensor fusion and cross-modal attention for multimodal physiological signals based emotion recognition.
    Wan X; Wang Y; Wang Z; Tang Y; Liu B
    Physiol Meas; 2024 Jul; 45(7):. PubMed ID: 38917842

  • 37. Cross-Attentional Spatio-Temporal Semantic Graph Networks for Video Question Answering.
    Liu Y; Zhang X; Huang F; Zhang B; Li Z
    IEEE Trans Image Process; 2022; 31():1684-1696. PubMed ID: 35044914

  • 38. Semantics-Aware Spatial-Temporal Binaries for Cross-Modal Video Retrieval.
    Qi M; Qin J; Yang Y; Wang Y; Luo J
    IEEE Trans Image Process; 2021; 30():2989-3004. PubMed ID: 33560984

  • 39. A multimodal convolutional neuro-fuzzy network for emotion understanding of movie clips.
    Nguyen TL; Kavuri S; Lee M
    Neural Netw; 2019 Oct; 118():208-219. PubMed ID: 31299625

  • 40. Human-Object Interaction detection via Global Context and Pairwise-level Fusion Features Integration.
    Wang H; Yu H; Zhang Q
    Neural Netw; 2024 Feb; 170():242-253. PubMed ID: 37995546
