119 related articles for article (PubMed ID: 38530725)
1. Latent Attention Network With Position Perception for Visual Question Answering. Zhang J; Liu X; Wang Z. IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38530725
2. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network. Peng L; Yang Y; Wang Z; Huang Z; Shen HT. IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794
3. Rich Visual Knowledge-Based Augmentation Network for Visual Question Answering. Zhang L; Liu S; Liu D; Zeng P; Li X; Song J; Gao L. IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4362-4373. PubMed ID: 32941156
4. Robust visual question answering via polarity enhancement and contrast. Peng D; Li Z. Neural Netw; 2024 Nov; 179():106560. PubMed ID: 39079376
5. Medical visual question answering based on question-type reasoning and semantic space constraint. Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C. Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340
6. An effective spatial relational reasoning networks for visual question answering. Shen X; Han D; Chen C; Luo G; Wu Z. PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742
7. Weakly-Supervised 3D Spatial Reasoning for Text-Based Visual Question Answering. Li H; Huang J; Jin P; Song G; Wu Q; Chen J. IEEE Trans Image Process; 2023; 32():3367-3382. PubMed ID: 37256804
8. BPI-MVQA: a bi-branch model for medical visual question answering. Liu S; Zhang X; Zhou X; Yang J. BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285
9. Visual question answering based on local-scene-aware referring expression generation. Kim JJ; Lee DG; Wu J; Jung HG; Lee SW. Neural Netw; 2021 Jul; 139():158-167. PubMed ID: 33714005
10. An Effective Dense Co-Attention Networks for Visual Question Answering. He S; Han D. Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32872620
11. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering. Guo Z; Han D. Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994
12. Asymmetric cross-modal attention network with multimodal augmented mixup for medical visual question answering. Li Y; Yang Q; Wang FL; Lee LK; Qu Y; Hao T. Artif Intell Med; 2023 Oct; 144():102667. PubMed ID: 37783542
13. Parallel multi-head attention and term-weighted question embedding for medical visual question answering. Manmadhan S; Kovoor BC. Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667
14. Interpretable Visual Question Answering by Reasoning on Dependency Trees. Cao Q; Liang X; Li B; Lin L. IEEE Trans Pattern Anal Mach Intell; 2021 Mar; 43(3):887-901. PubMed ID: 31562071
15. Advancing surgical VQA with scene graph knowledge. Yuan K; Kattel M; Lavanchy JL; Navab N; Srivastav V; Padoy N. Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1409-1417. PubMed ID: 38780829
16. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering. Cao J; Qin X; Zhao S; Shen J. IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171
17. Medical visual question answering via corresponding feature fusion combined with semantic attention. Zhu H; He X; Wang M; Zhang M; Qing L. Math Biosci Eng; 2022 Jul; 19(10):10192-10212. PubMed ID: 36031991
18. Counterfactual Samples Synthesizing and Training for Robust Visual Question Answering. Chen L; Zheng Y; Niu Y; Zhang H; Xiao J. IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13218-13234. PubMed ID: 37368813
19. Depth and Video Segmentation Based Visual Attention for Embodied Question Answering. Luo H; Lin G; Yao Y; Liu F; Liu Z; Tang Z. IEEE Trans Pattern Anal Mach Intell; 2023 Jun; 45(6):6807-6819. PubMed ID: 34982673
20. Multitask Learning for Visual Question Answering. Ma J; Liu J; Lin Q; Wu B; Wang Y; You Y. IEEE Trans Neural Netw Learn Syst; 2023 Mar; 34(3):1380-1394. PubMed ID: 34460390