209 related articles for article (PubMed ID: 29993847)
1. Beyond Bilinear: Generalized Multimodal Factorized High-Order Pooling for Visual Question Answering. Yu Z; Yu J; Xiang C; Fan J; Tao D. IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):5947-5959. PubMed ID: 29993847
2. Visual question answering model for fruit tree disease decision-making based on multimodal deep learning. Lan Y; Guo Y; Chen Q; Lin S; Chen Y; Deng X. Front Plant Sci; 2022; 13:1064399. PubMed ID: 36684756
3. Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels. Winterbottom T; Xiao S; McLean A; Al Moubayed N. PeerJ Comput Sci; 2022; 8:e974. PubMed ID: 35721409
5. A Bi-level representation learning model for medical visual question answering. Li Y; Long S; Yang Z; Weng H; Zeng K; Huang Z; Lee Wang F; Hao T. J Biomed Inform; 2022 Oct; 134:104183. PubMed ID: 36038063
6. Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool. Liu F; Xiang T; Hospedales TM; Yang W; Sun C. IEEE Trans Pattern Anal Mach Intell; 2020 Feb; 42(2):460-474. PubMed ID: 30418897
7. An Effective Dense Co-Attention Networks for Visual Question Answering. He S; Han D. Sensors (Basel); 2020 Aug; 20(17). PubMed ID: 32872620
8. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering. Guo Z; Han D. Sensors (Basel); 2020 Nov; 20(23). PubMed ID: 33255994
9. Multitask Learning for Visual Question Answering. Ma J; Liu J; Lin Q; Wu B; Wang Y; You Y. IEEE Trans Neural Netw Learn Syst; 2023 Mar; 34(3):1380-1394. PubMed ID: 34460390
10. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network. Peng L; Yang Y; Wang Z; Huang Z; Shen HT. IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794
11. Dual modality prompt learning for visual question-grounded answering in robotic surgery. Zhang Y; Fan W; Peng P; Yang X; Zhou D; Wei X. Vis Comput Ind Biomed Art; 2024 Apr; 7(1):9. PubMed ID: 38647624
12. Robust visual question answering via polarity enhancement and contrast. Peng D; Li Z. Neural Netw; 2024 Nov; 179:106560. PubMed ID: 39079376
13. Parallel multi-head attention and term-weighted question embedding for medical visual question answering. Manmadhan S; Kovoor BC. Multimed Tools Appl; 2023 Mar; 1-22. PubMed ID: 37362667
14. BPI-MVQA: a bi-branch model for medical visual question answering. Liu S; Zhang X; Zhou X; Yang J. BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285
15. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding. Cao Q; Li B; Liang X; Wang K; Lin L. IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313
16. Medical visual question answering based on question-type reasoning and semantic space constraint. Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C. Artif Intell Med; 2022 Sep; 131:102346. PubMed ID: 36100340
17. Collaborative Modality Fusion for Mitigating Language Bias in Visual Question Answering. Lu Q; Chen S; Zhu X. J Imaging; 2024 Feb; 10(3). PubMed ID: 38535137
18. Reducing Vision-Answer Biases for Multiple-Choice VQA. Zhang X; Zhang F; Xu C. IEEE Trans Image Process; 2023; 32:4621-4634. PubMed ID: 37556338
19. Structured Multimodal Attentions for TextVQA. Gao C; Zhu Q; Wang P; Li H; Liu Y; Hengel AVD; Wu Q. IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9603-9614. PubMed ID: 34855584
20. Deep Modular Bilinear Attention Network for Visual Question Answering. Yan F; Silamu W; Li Y. Sensors (Basel); 2022 Jan; 22(3). PubMed ID: 35161790