These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

113 related articles for article (PubMed ID: 39093673)

  • 1. UNK-VQA: A Dataset and a Probe Into the Abstention Ability of Multi-Modal Large Models.
    Guo Y; Jiao F; Shen Z; Nie L; Kankanhalli M
    IEEE Trans Pattern Anal Mach Intell; 2024 Dec; 46(12):10284-10296. PubMed ID: 39093673

  • 2. Interpretable medical image Visual Question Answering via multi-modal relationship graph learning.
    Hu X; Gu L; Kobayashi K; Liu L; Zhang M; Harada T; Summers RM; Zhu Y
    Med Image Anal; 2024 Oct; 97():103279. PubMed ID: 39079429

  • 3. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 4. Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool.
    Liu F; Xiang T; Hospedales TM; Yang W; Sun C
    IEEE Trans Pattern Anal Mach Intell; 2020 Feb; 42(2):460-474. PubMed ID: 30418897

  • 5. Advancing surgical VQA with scene graph knowledge.
    Yuan K; Kattel M; Lavanchy JL; Navab N; Srivastav V; Padoy N
    Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1409-1417. PubMed ID: 38780829

  • 6. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 7. Medical visual question answering based on question-type reasoning and semantic space constraint.
    Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C
    Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340

  • 8. 3D Question Answering.
    Ye S; Chen D; Han S; Liao J
    IEEE Trans Vis Comput Graph; 2024 Mar; 30(3):1772-1786. PubMed ID: 36446015

  • 9. Multi-modal adaptive gated mechanism for visual question answering.
    Xu Y; Zhang L; Shen X
    PLoS One; 2023; 18(6):e0287557. PubMed ID: 37379280

  • 10. Unanswerable Questions About Images and Texts.
    Davis E
    Front Artif Intell; 2020; 3():51. PubMed ID: 33733168

  • 11. Parallel multi-head attention and term-weighted question embedding for medical visual question answering.
    Manmadhan S; Kovoor BC
    Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667

  • 12. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 13. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
    Cao Q; Li B; Liang X; Wang K; Lin L
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313

  • 14. The multi-modal fusion in visual question answering: a review of attention mechanisms.
    Lu S; Liu M; Yin L; Yin Z; Liu X; Zheng W
    PeerJ Comput Sci; 2023; 9():e1400. PubMed ID: 37346665

  • 15. Adversarial Learning With Multi-Modal Attention for Visual Question Answering.
    Liu Y; Zhang X; Huang F; Cheng L; Li Z
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):3894-3908. PubMed ID: 32833656

  • 16. Asymmetric cross-modal attention network with multimodal augmented mixup for medical visual question answering.
    Li Y; Yang Q; Wang FL; Lee LK; Qu Y; Hao T
    Artif Intell Med; 2023 Oct; 144():102667. PubMed ID: 37783542

  • 17. CRIC: A VQA Dataset for Compositional Reasoning on Vision and Commonsense.
    Gao D; Wang R; Shan S; Chen X
    IEEE Trans Pattern Anal Mach Intell; 2023 May; 45(5):5561-5578. PubMed ID: 36173773

  • 18. Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation.
    Yan X; Yuan Z; Du Y; Liao Y; Guo Y; Cui S; Li Z
    IEEE Trans Vis Comput Graph; 2024 Dec; 30(12):7473-7485. PubMed ID: 38064324

  • 19. Robust visual question answering via polarity enhancement and contrast.
    Peng D; Li Z
    Neural Netw; 2024 Nov; 179():106560. PubMed ID: 39079376

  • 20. BPI-MVQA: a bi-branch model for medical visual question answering.
    Liu S; Zhang X; Zhou X; Yang J
    BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285
