
BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

157 related articles for article (PubMed ID: 34428156)

  • 1. Bilinear Graph Networks for Visual Question Answering.
    Guo D; Xu C; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):1023-1034. PubMed ID: 34428156

  • 2. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 3. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
    Cao Q; Li B; Liang X; Wang K; Lin L
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313

  • 4. Rich Visual Knowledge-Based Augmentation Network for Visual Question Answering.
    Zhang L; Liu S; Liu D; Zeng P; Li X; Song J; Gao L
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4362-4373. PubMed ID: 32941156

  • 5. Medical visual question answering based on question-type reasoning and semantic space constraint.
    Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C
    Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340

  • 6. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 7. Deep Modular Bilinear Attention Network for Visual Question Answering.
    Yan F; Silamu W; Li Y
    Sensors (Basel); 2022 Jan; 22(3):. PubMed ID: 35161790

  • 8. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 9. Advancing surgical VQA with scene graph knowledge.
    Yuan K; Kattel M; Lavanchy JL; Navab N; Srivastav V; Padoy N
    Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1409-1417. PubMed ID: 38780829

  • 10. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering.
    Cao J; Qin X; Zhao S; Shen J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171

  • 11. A Bi-level representation learning model for medical visual question answering.
    Li Y; Long S; Yang Z; Weng H; Zeng K; Huang Z; Lee Wang F; Hao T
    J Biomed Inform; 2022 Oct; 134():104183. PubMed ID: 36038063

  • 12. CRIC: A VQA Dataset for Compositional Reasoning on Vision and Commonsense.
    Gao D; Wang R; Shan S; Chen X
    IEEE Trans Pattern Anal Mach Intell; 2023 May; 45(5):5561-5578. PubMed ID: 36173773

  • 13. An Effective Dense Co-Attention Networks for Visual Question Answering.
    He S; Han D
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32872620

  • 14. Exploring Sparse Spatial Relation in Graph Inference for Text-Based VQA.
    Zhou S; Guo D; Li J; Yang X; Wang M
    IEEE Trans Image Process; 2023; 32():5060-5074. PubMed ID: 37669188

  • 15. Learning to Reason on Tree Structures for Knowledge-Based Visual Question Answering.
    Li Q; Tang X; Jian Y
    Sensors (Basel); 2022 Feb; 22(4):. PubMed ID: 35214484

  • 16. A Comprehensive Survey of Scene Graphs: Generation and Application.
    Chang X; Ren P; Xu P; Li Z; Chen X; Hauptmann A
    IEEE Trans Pattern Anal Mach Intell; 2023 Jan; 45(1):1-26. PubMed ID: 34941499

  • 17. Multitask Learning for Visual Question Answering.
    Ma J; Liu J; Lin Q; Wu B; Wang Y; You Y
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; 34(3):1380-1394. PubMed ID: 34460390

  • 18. Relation-Aware Fine-Grained Reasoning Network for Textbook Question Answering.
    Ma J; Liu J; Wang Y; Li J; Liu T
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):15-27. PubMed ID: 34181555

  • 19. Structured Multimodal Attentions for TextVQA.
    Gao C; Zhu Q; Wang P; Li H; Liu Y; Hengel AVD; Wu Q
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9603-9614. PubMed ID: 34855584

  • 20. BPI-MVQA: a bi-branch model for medical visual question answering.
    Liu S; Zhang X; Zhou X; Yang J
    BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285
