
118 related articles for article (PubMed ID: 38647624)

  • 1. Dual modality prompt learning for visual question-grounded answering in robotic surgery.
    Zhang Y; Fan W; Peng P; Yang X; Zhou D; Wei X
    Vis Comput Ind Biomed Art; 2024 Apr; 7(1):9. PubMed ID: 38647624

  • 2. Multitask Learning for Visual Question Answering.
    Ma J; Liu J; Lin Q; Wu B; Wang Y; You Y
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; 34(3):1380-1394. PubMed ID: 34460390

  • 3. Robust visual question answering via polarity enhancement and contrast.
    Peng D; Li Z
    Neural Netw; 2024 Nov; 179():106560. PubMed ID: 39079376

  • 4. Advancing surgical VQA with scene graph knowledge.
    Yuan K; Kattel M; Lavanchy JL; Navab N; Srivastav V; Padoy N
    Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1409-1417. PubMed ID: 38780829

  • 5. Beyond Bilinear: Generalized Multimodal Factorized High-Order Pooling for Visual Question Answering.
    Yu Z; Yu J; Xiang C; Fan J; Tao D
    IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):5947-5959. PubMed ID: 29993847

  • 6. Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool.
    Liu F; Xiang T; Hospedales TM; Yang W; Sun C
    IEEE Trans Pattern Anal Mach Intell; 2020 Feb; 42(2):460-474. PubMed ID: 30418897

  • 7. A Dual-Attention Learning Network With Word and Sentence Embedding for Medical Visual Question Answering.
    Huang X; Gong H
    IEEE Trans Med Imaging; 2024 Feb; 43(2):832-845. PubMed ID: 37812550

  • 8. Multi-modal adaptive gated mechanism for visual question answering.
    Xu Y; Zhang L; Shen X
    PLoS One; 2023; 18(6):e0287557. PubMed ID: 37379280

  • 9. Image to English translation and comprehension: INT2-VQA method based on inter-modality and intra-modality collaborations.
    Sheng X
    PLoS One; 2023; 18(8):e0290315. PubMed ID: 37647277

  • 10. Collaborative Modality Fusion for Mitigating Language Bias in Visual Question Answering.
    Lu Q; Chen S; Zhu X
    J Imaging; 2024 Feb; 10(3):. PubMed ID: 38535137

  • 11. Knowledge-Augmented Visual Question Answering With Natural Language Explanation.
    Xie J; Cai Y; Chen J; Xu R; Wang J; Li Q
    IEEE Trans Image Process; 2024; 33():2652-2664. PubMed ID: 38546994

  • 12. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering.
    Cao J; Qin X; Zhao S; Shen J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171

  • 13. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 14. Vision-Language Model for Visual Question Answering in Medical Imagery.
    Bazi Y; Rahhal MMA; Bashmal L; Zuair M
    Bioengineering (Basel); 2023 Mar; 10(3):. PubMed ID: 36978771

  • 15. Interpretable medical image Visual Question Answering via multi-modal relationship graph learning.
    Hu X; Gu L; Kobayashi K; Liu L; Zhang M; Harada T; Summers RM; Zhu Y
    Med Image Anal; 2024 Oct; 97():103279. PubMed ID: 39079429

  • 16. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
    Cao Q; Li B; Liang X; Wang K; Lin L
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313

  • 17. Bridging the Cross-Modality Semantic Gap in Visual Question Answering.
    Wang B; Ma Y; Li X; Gao J; Hu Y; Yin B
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38446647

  • 18. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 19. Structured Multimodal Attentions for TextVQA.
    Gao C; Zhu Q; Wang P; Li H; Liu Y; Hengel AVD; Wu Q
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9603-9614. PubMed ID: 34855584

  • 20. Medical visual question answering via corresponding feature fusion combined with semantic attention.
    Zhu H; He X; Wang M; Zhang M; Qing L
    Math Biosci Eng; 2022 Jul; 19(10):10192-10212. PubMed ID: 36031991
