BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

145 related articles for article (PubMed ID: 38446647)

  • 1. Bridging the Cross-Modality Semantic Gap in Visual Question Answering.
    Wang B; Ma Y; Li X; Gao J; Hu Y; Yin B
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38446647

  • 2. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 3. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering.
    Cao J; Qin X; Zhao S; Shen J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171

  • 4. Medical visual question answering based on question-type reasoning and semantic space constraint.
    Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C
    Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340

  • 5. Pre-training Model Based on Parallel Cross-Modality Fusion Layer.
    Li X; Han D; Chang CC
    PLoS One; 2022; 17(2):e0260784. PubMed ID: 35113862

  • 6. Adversarial Learning With Multi-Modal Attention for Visual Question Answering.
    Liu Y; Zhang X; Huang F; Cheng L; Li Z
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):3894-3908. PubMed ID: 32833656

  • 7. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 8. Medical visual question answering via corresponding feature fusion combined with semantic attention.
    Zhu H; He X; Wang M; Zhang M; Qing L
    Math Biosci Eng; 2022 Jul; 19(10):10192-10212. PubMed ID: 36031991

  • 9. Complete 3D Relationships Extraction Modality Alignment Network for 3D Dense Captioning.
    Mao A; Yang Z; Chen W; Yi R; Liu YJ
    IEEE Trans Vis Comput Graph; 2024 Aug; 30(8):4867-4880. PubMed ID: 37220037

  • 10. Image to English translation and comprehension: INT2-VQA method based on inter-modality and intra-modality collaborations.
    Sheng X
    PLoS One; 2023; 18(8):e0290315. PubMed ID: 37647277

  • 11. A Bi-level representation learning model for medical visual question answering.
    Li Y; Long S; Yang Z; Weng H; Zeng K; Huang Z; Lee Wang F; Hao T
    J Biomed Inform; 2022 Oct; 134():104183. PubMed ID: 36038063

  • 12. USER: Unified Semantic Enhancement With Momentum Contrast for Image-Text Retrieval.
    Zhang Y; Ji Z; Wang D; Pang Y; Li X
    IEEE Trans Image Process; 2024; 33():595-609. PubMed ID: 38190676

  • 13. Asymmetric cross-modal attention network with multimodal augmented mixup for medical visual question answering.
    Li Y; Yang Q; Wang FL; Lee LK; Qu Y; Hao T
    Artif Intell Med; 2023 Oct; 144():102667. PubMed ID: 37783542

  • 14. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
    Cao Q; Li B; Liang X; Wang K; Lin L
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313

  • 15. Advancing surgical VQA with scene graph knowledge.
    Yuan K; Kattel M; Lavanchy JL; Navab N; Srivastav V; Padoy N
    Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1409-1417. PubMed ID: 38780829

  • 16. Parallel multi-head attention and term-weighted question embedding for medical visual question answering.
    Manmadhan S; Kovoor BC
    Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667

  • 17. Learning Dual Encoding Model for Adaptive Visual Understanding in Visual Dialogue.
    Yu J; Jiang X; Qin Z; Zhang W; Hu Y; Wu Q
    IEEE Trans Image Process; 2021; 30():220-233. PubMed ID: 33141670

  • 18. Medical Visual Question Answering via Conditional Reasoning and Contrastive Learning.
    Liu B; Zhan LM; Xu L; Wu XM
    IEEE Trans Med Imaging; 2023 May; 42(5):1532-1545. PubMed ID: 37015503

  • 19. Cross-Attentional Spatio-Temporal Semantic Graph Networks for Video Question Answering.
    Liu Y; Zhang X; Huang F; Zhang B; Li Z
    IEEE Trans Image Process; 2022; 31():1684-1696. PubMed ID: 35044914

  • 20. Dual modality prompt learning for visual question-grounded answering in robotic surgery.
    Zhang Y; Fan W; Peng P; Yang X; Zhou D; Wei X
    Vis Comput Ind Biomed Art; 2024 Apr; 7(1):9. PubMed ID: 38647624
