These tools are no longer maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine *

141 related articles for article (PubMed ID: 37765933)

  • 1. Natural-Language-Driven Multimodal Representation Learning for Audio-Visual Scene-Aware Dialog System.
    Heo Y; Kang S; Seo J
    Sensors (Basel); 2023 Sep; 23(18):. PubMed ID: 37765933
    [TBL] [Abstract][Full Text] [Related]  

  • 2. DER-GCN: Dialog and Event Relation-Aware Graph Convolutional Neural Network for Multimodal Dialog Emotion Recognition.
    Ai W; Shou Y; Meng T; Li K
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38437139
    [TBL] [Abstract][Full Text] [Related]  

  • 3. Discriminative Cross-Modality Attention Network for Temporal Inconsistent Audio-Visual Event Localization.
    Xuan H; Luo L; Zhang Z; Yang J; Yan Y
    IEEE Trans Image Process; 2021; 30():7878-7888. PubMed ID: 34478364
    [TBL] [Abstract][Full Text] [Related]  

  • 4. Integrating audio and visual modalities for multimodal personality trait recognition.
    Zhao X; Liao Y; Tang Z; Xu Y; Tao X; Wang D; Wang G; Lu H
    Front Neurosci; 2022; 16():1107284. PubMed ID: 36685221
    [TBL] [Abstract][Full Text] [Related]  

  • 5. Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild.
    He Y; Seng KP; Ang LM
    Sensors (Basel); 2023 Feb; 23(4):. PubMed ID: 36850432
    [TBL] [Abstract][Full Text] [Related]  

  • 6. Multitask Learning and Reinforcement Learning for Personalized Dialog Generation: An Empirical Study.
    Yang M; Huang W; Tu W; Qu Q; Shen Y; Lei K
    IEEE Trans Neural Netw Learn Syst; 2021 Jan; 32(1):49-62. PubMed ID: 32149657
    [TBL] [Abstract][Full Text] [Related]  

  • 7. A Multimodal Saliency Model for Videos with High Audio-Visual Correspondence.
    Min X; Zhai G; Zhou J; Zhang XP; Yang X; Guan X
    IEEE Trans Image Process; 2020 Jan; ():. PubMed ID: 31976898
    [TBL] [Abstract][Full Text] [Related]  

  • 8. Multimodal interaction enhanced representation learning for video emotion recognition.
    Xia X; Zhao Y; Jiang D
    Front Neurosci; 2022; 16():1086380. PubMed ID: 36601594
    [TBL] [Abstract][Full Text] [Related]  

  • 9. End-to-end multimodal clinical depression recognition using deep neural networks: A comparative analysis.
    Muzammel M; Salam H; Othmani A
    Comput Methods Programs Biomed; 2021 Nov; 211():106433. PubMed ID: 34614452
    [TBL] [Abstract][Full Text] [Related]  

  • 10. Augmented Robotics Dialog System for Enhancing Human-Robot Interaction.
    Alonso-Martín F; Castro-González A; Luengo FJ; Salichs MÁ
    Sensors (Basel); 2015 Jul; 15(7):15799-829. PubMed ID: 26151202
    [TBL] [Abstract][Full Text] [Related]  

  • 11. Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art.
    Lu S; Wang P
    Front Neurorobot; 2023; 17():1281944. PubMed ID: 37841080
    [TBL] [Abstract][Full Text] [Related]  

  • 12. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.
    Meyerhoff HS; Huff M
    Mem Cognit; 2016 Apr; 44(3):390-402. PubMed ID: 26620810
    [TBL] [Abstract][Full Text] [Related]  

  • 13. Effect of task-related continuous auditory feedback during learning of tracking motion exercises.
    Rosati G; Oscari F; Spagnol S; Avanzini F; Masiero S
    J Neuroeng Rehabil; 2012 Oct; 9():79. PubMed ID: 23046683
    [TBL] [Abstract][Full Text] [Related]  

  • 14. Unsupervised Modality-Transferable Video Highlight Detection With Representation Activation Sequence Learning.
    Li T; Sun Z; Xiao X
    IEEE Trans Image Process; 2024; 33():1911-1922. PubMed ID: 38451754
    [TBL] [Abstract][Full Text] [Related]  

  • 15. Relational Temporal Graph Reasoning for Dual-Task Dialogue Language Understanding.
    Xing B; Tsang IW
    IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13170-13184. PubMed ID: 37363836
    [TBL] [Abstract][Full Text] [Related]  

  • 16. AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model.
    Mingyu J; Jiawei Z; Ning W
    PLoS One; 2022; 17(9):e0273936. PubMed ID: 36084041
    [TBL] [Abstract][Full Text] [Related]  

  • 17. Talk-to-Edit: Fine-Grained 2D and 3D Facial Editing via Dialog.
    Jiang Y; Huang Z; Wu T; Pan X; Loy CC; Liu Z
    IEEE Trans Pattern Anal Mach Intell; 2024 May; 46(5):3692-3706. PubMed ID: 38147423
    [TBL] [Abstract][Full Text] [Related]  

  • 18. Lightweight dense video captioning with cross-modal attention and knowledge-enhanced unbiased scene graph.
    Han S; Liu J; Zhang J; Gong P; Zhang X; He H
    Complex Intell Systems; 2023 Feb; ():1-18. PubMed ID: 36855683
    [TBL] [Abstract][Full Text] [Related]  

  • 19. Modulation of scene consistency and task demand on language-driven eye movements for audio-visual integration.
    Yu WY; Tsai JL
    Acta Psychol (Amst); 2016 Nov; 171():1-16. PubMed ID: 27640139
    [TBL] [Abstract][Full Text] [Related]  

  • 20. More to diverse: Generating diversified responses in a task oriented multimodal dialog system.
    Firdaus M; Pratap Shandeelya A; Ekbal A
    PLoS One; 2020; 15(11):e0241271. PubMed ID: 33151948
    [TBL] [Abstract][Full Text] [Related]  

    Page 1 of 8.