These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.
155 related articles for article (PubMed ID: 37895542)
1. Multi-Modal Representation via Contrastive Learning with Attention Bottleneck Fusion and Attentive Statistics Features. Guo Q; Liao Y; Li Z; Liang S. Entropy (Basel). 2023 Oct;25(10). PubMed ID: 37895542.
2. Multimodal Sentiment Analysis Representations Learning via Contrastive Learning with Condense Attention Fusion. Wang H; Li X; Ren Z; Wang M; Ma C. Sensors (Basel). 2023 Mar;23(5). PubMed ID: 36904883.
3. Hierarchical Fusion Network with Enhanced Knowledge and Contrastive Learning for Multimodal Aspect-Based Sentiment Analysis on Social Media. Hu X; Yamamura M. Sensors (Basel). 2023 Aug;23(17). PubMed ID: 37687785.
4. Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective. Tao R; Zhu M; Cao H; Ren H. Sensors (Basel). 2024 May;24(10). PubMed ID: 38793984.
5. SG-Fusion: A swin-transformer and graph convolution-based multi-modal deep neural network for glioma prognosis. Fu M; Fang M; Khan RA; Liao B; Hu Z; Wu FX. Artif Intell Med. 2024 Nov;157:102972. PubMed ID: 39232270.
6. A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations. Liu H; Zhuang Y; Song E; Xu X; Ma G; Cetinkaya C; Hung CC. Med Phys. 2023 Sep;50(9):5460-5478. PubMed ID: 36864700.
7. Cross-Modal Sentiment Sensing with Visual-Augmented Representation and Diverse Decision Fusion. Zhang S; Li B; Yin C. Sensors (Basel). 2021 Dec;22(1). PubMed ID: 35009620.
8. Contrastive self-supervised representation learning without negative samples for multimodal human action recognition. Yang H; Ren Z; Yuan H; Xu Z; Zhou J. Front Neurosci. 2023;17:1225312. PubMed ID: 37476841.
9. Composite attention mechanism network for deep contrastive multi-view clustering. Du T; Zheng W; Xu X. Neural Netw. 2024 Aug;176:106361. PubMed ID: 38723307.
10. Hierarchical graph contrastive learning of local and global presentation for multimodal sentiment analysis. Du J; Jin J; Zhuang J; Zhang C. Sci Rep. 2024 Mar;14(1):5335. PubMed ID: 38438435.
11. Multimodal Sentiment Analysis Based on Cross-Modal Attention and Gated Cyclic Hierarchical Fusion Networks. Quan Z; Sun T; Su M; Wei J. Comput Intell Neurosci. 2022;2022:4767437. PubMed ID: 35983132.
12. Joint self-supervised and supervised contrastive learning for multimodal MRI data: Towards predicting abnormal neurodevelopment. Li Z; Li H; Ralescu AL; Dillman JR; Altaye M; Cecil KM; Parikh NA; He L. Artif Intell Med. 2024 Nov;157:102993. PubMed ID: 39369634.
13. Alignment-Enhanced Interactive Fusion Model for Complete and Incomplete Multimodal Hand Gesture Recognition. Duan S; Wu L; Liu A; Chen X. IEEE Trans Neural Syst Rehabil Eng. 2023;31:4661-4671. PubMed ID: 37983152.
14. TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis. Zhang Y; Xie F; Chen J. Comput Biol Med. 2023 May;157:106712. PubMed ID: 36907033.
15. Joint Classification of Hyperspectral Images and LiDAR Data Based on Dual-Branch Transformer. Wang Q; Zhou B; Zhang J; Xie J; Wang Y. Sensors (Basel). 2024 Jan;24(3). PubMed ID: 38339584.
16. Multimodal transformer augmented fusion for speech emotion recognition. Wang Y; Gu Y; Yin Y; Han Y; Zhang H; Wang S; Li C; Quan D. Front Neurorobot. 2023;17:1181598. PubMed ID: 37283784.
17. Multimodal interaction enhanced representation learning for video emotion recognition. Xia X; Zhao Y; Jiang D. Front Neurosci. 2022;16:1086380. PubMed ID: 36601594.
18. Attention-based multimodal fusion with contrast for robust clinical prediction in the face of missing modalities. Liu J; Capurro D; Nguyen A; Verspoor K. J Biomed Inform. 2023 Sep;145:104466. PubMed ID: 37549722.
19. COM: Contrastive Masked-attention model for incomplete multimodal learning. Qian S; Wang C. Neural Netw. 2023 May;162:443-455. PubMed ID: 36965274.
20. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation. Chaitanya K; Erdil E; Karani N; Konukoglu E. Med Image Anal. 2023 Jul;87:102792. PubMed ID: 37054649.