These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

198 related articles for article (PubMed ID: 30418897)

  • 21. Medical visual question answering based on question-type reasoning and semantic space constraint.
    Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C
    Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340

  • 22. BPI-MVQA: a bi-branch model for medical visual question answering.
    Liu S; Zhang X; Zhou X; Yang J
    BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285

  • 23. Radial Graph Convolutional Network for Visual Question Generation.
    Xu X; Wang T; Yang Y; Hanjalic A; Shen HT
    IEEE Trans Neural Netw Learn Syst; 2021 Apr; 32(4):1654-1667. PubMed ID: 32340964

  • 24. FVQA: Fact-based Visual Question Answering.
    Wang P; Wu Q; Shen C; Dick A; Hengel AVD
    IEEE Trans Pattern Anal Mach Intell; 2018 Oct; 40(10):2413-2427. PubMed ID: 28945588

  • 25. NExT-OOD: Overcoming Dual Multiple-Choice VQA Biases.
    Zhang X; Zhang F; Xu C
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; 46(4):1913-1931. PubMed ID: 37093718

  • 26. Re-Attention for Visual Question Answering.
    Guo W; Zhang Y; Yang J; Yuan X
    IEEE Trans Image Process; 2021; 30():6730-6743. PubMed ID: 34283714

  • 27. UNK-VQA: A Dataset and a Probe Into the Abstention Ability of Multi-Modal Large Models.
    Guo Y; Jiao F; Shen Z; Nie L; Kankanhalli M
    IEEE Trans Pattern Anal Mach Intell; 2024 Dec; 46(12):10284-10296. PubMed ID: 39093673

  • 28. Counterfactual Samples Synthesizing and Training for Robust Visual Question Answering.
    Chen L; Zheng Y; Niu Y; Zhang H; Xiao J
    IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13218-13234. PubMed ID: 37368813

  • 29. 3D Question Answering.
    Ye S; Chen D; Han S; Liao J
    IEEE Trans Vis Comput Graph; 2024 Mar; 30(3):1772-1786. PubMed ID: 36446015

  • 30. Medical Visual Question Answering via Conditional Reasoning and Contrastive Learning.
    Liu B; Zhan LM; Xu L; Wu XM
    IEEE Trans Med Imaging; 2023 May; 42(5):1532-1545. PubMed ID: 37015503

  • 31. Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA.
    Vosoughi A; Deng S; Zhang S; Tian Y; Xu C; Luo J
    IEEE Trans Multimedia; 2024; 26():8609-8624. PubMed ID: 39429951

  • 32. Deep Modular Bilinear Attention Network for Visual Question Answering.
    Yan F; Silamu W; Li Y
    Sensors (Basel); 2022 Jan; 22(3):. PubMed ID: 35161790

  • 33. Reducing Vision-Answer Biases for Multiple-Choice VQA.
    Zhang X; Zhang F; Xu C
    IEEE Trans Image Process; 2023; 32():4621-4634. PubMed ID: 37556338

  • 34. A Bi-level representation learning model for medical visual question answering.
    Li Y; Long S; Yang Z; Weng H; Zeng K; Huang Z; Lee Wang F; Hao T
    J Biomed Inform; 2022 Oct; 134():104183. PubMed ID: 36038063

  • 35. Multi-modal adaptive gated mechanism for visual question answering.
    Xu Y; Zhang L; Shen X
    PLoS One; 2023; 18(6):e0287557. PubMed ID: 37379280

  • 36. A reinforcement learning approach for VQA validation: An application to diabetic macular edema grading.
    Fountoukidou T; Sznitman R
    Med Image Anal; 2023 Jul; 87():102822. PubMed ID: 37182321

  • 37. Bridging the Cross-Modality Semantic Gap in Visual Question Answering.
    Wang B; Ma Y; Li X; Gao J; Hu Y; Yin B
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38446647

  • 38. ALSA: Adversarial Learning of Supervised Attentions for Visual Question Answering.
    Liu Y; Zhang X; Zhao Z; Zhang B; Cheng L; Li Z
    IEEE Trans Cybern; 2022 Jun; 52(6):4520-4533. PubMed ID: 33175690

  • 39. The multi-modal fusion in visual question answering: a review of attention mechanisms.
    Lu S; Liu M; Yin L; Yin Z; Liu X; Zheng W
    PeerJ Comput Sci; 2023; 9():e1400. PubMed ID: 37346665

  • 40. An Effective Dense Co-Attention Networks for Visual Question Answering.
    He S; Han D
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32872620
