

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

111 related articles for article (PubMed ID: 35113862)

  • 1. Pre-training Model Based on Parallel Cross-Modality Fusion Layer.
    Li X; Han D; Chang CC
    PLoS One; 2022; 17(2):e0260784. PubMed ID: 35113862

  • 2. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 3. Bridging the Cross-Modality Semantic Gap in Visual Question Answering.
    Wang B; Ma Y; Li X; Gao J; Hu Y; Yin B
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38446647

  • 4. Deep Modular Bilinear Attention Network for Visual Question Answering.
    Yan F; Silamu W; Li Y
    Sensors (Basel); 2022 Jan; 22(3):. PubMed ID: 35161790

  • 5. Multi-modal adaptive gated mechanism for visual question answering.
    Xu Y; Zhang L; Shen X
    PLoS One; 2023; 18(6):e0287557. PubMed ID: 37379280

  • 6. Vision-Language Transformer for Interpretable Pathology Visual Question Answering.
    Naseem U; Khushi M; Kim J
    IEEE J Biomed Health Inform; 2023 Apr; 27(4):1681-1690. PubMed ID: 35358054

  • 7. An Effective Dense Co-Attention Networks for Visual Question Answering.
    He S; Han D
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32872620

  • 8. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 9. Vision-Language Model for Visual Question Answering in Medical Imagery.
    Bazi Y; Rahhal MMA; Bashmal L; Zuair M
    Bioengineering (Basel); 2023 Mar; 10(3):. PubMed ID: 36978771

  • 10. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering.
    Cao J; Qin X; Zhao S; Shen J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171

  • 11. A Bi-level representation learning model for medical visual question answering.
    Li Y; Long S; Yang Z; Weng H; Zeng K; Huang Z; Lee Wang F; Hao T
    J Biomed Inform; 2022 Oct; 134():104183. PubMed ID: 36038063

  • 12. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 13. Advancing surgical VQA with scene graph knowledge.
    Yuan K; Kattel M; Lavanchy JL; Navab N; Srivastav V; Padoy N
    Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1409-1417. PubMed ID: 38780829

  • 14. Collaborative Modality Fusion for Mitigating Language Bias in Visual Question Answering.
    Lu Q; Chen S; Zhu X
    J Imaging; 2024 Feb; 10(3):. PubMed ID: 38535137

  • 15. What Does a Language-And-Vision Transformer See: The Impact of Semantic Information on Visual Representations.
    Ilinykh N; Dobnik S
    Front Artif Intell; 2021; 4():767971. PubMed ID: 34927063

  • 16. Plenty is Plague: Fine-Grained Learning for Visual Question Answering.
    Zhou Y; Ji R; Sun X; Su J; Meng D; Gao Y; Shen C
    IEEE Trans Pattern Anal Mach Intell; 2022 Feb; 44(2):697-709. PubMed ID: 31796387

  • 17. ALSA: Adversarial Learning of Supervised Attentions for Visual Question Answering.
    Liu Y; Zhang X; Zhao Z; Zhang B; Cheng L; Li Z
    IEEE Trans Cybern; 2022 Jun; 52(6):4520-4533. PubMed ID: 33175690

  • 18. Medical visual question answering based on question-type reasoning and semantic space constraint.
    Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C
    Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340

  • 19. Parallel multi-head attention and term-weighted question embedding for medical visual question answering.
    Manmadhan S; Kovoor BC
    Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667

  • 20. A Question-and-Answer System to Extract Data From Free-Text Oncological Pathology Reports (CancerBERT Network): Development Study.
    Mitchell JR; Szepietowski P; Howard R; Reisman P; Jones JD; Lewis P; Fridley BL; Rollison DE
    J Med Internet Res; 2022 Mar; 24(3):e27210. PubMed ID: 35319481
