

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

114 related articles for article (PubMed ID: 35358054)

  • 1. Vision-Language Transformer for Interpretable Pathology Visual Question Answering.
    Naseem U; Khushi M; Kim J
    IEEE J Biomed Health Inform; 2023 Apr; 27(4):1681-1690. PubMed ID: 35358054

  • 2. Vision-Language Model for Visual Question Answering in Medical Imagery.
    Bazi Y; Rahhal MMA; Bashmal L; Zuair M
    Bioengineering (Basel); 2023 Mar; 10(3):. PubMed ID: 36978771

  • 3. K-PathVQA: Knowledge-Aware Multimodal Representation for Pathology Visual Question Answering.
    Naseem U; Khushi M; Dunn AG; Kim J
    IEEE J Biomed Health Inform; 2023 Jul; PP():. PubMed ID: 37432797

  • 4. VQAMix: Conditional Triplet Mixup for Medical Visual Question Answering.
    Gong H; Chen G; Mao M; Li Z; Li G
    IEEE Trans Med Imaging; 2022 Nov; 41(11):3332-3343. PubMed ID: 35727773

  • 5. Parallel multi-head attention and term-weighted question embedding for medical visual question answering.
    Manmadhan S; Kovoor BC
    Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667

  • 6. A Bi-level representation learning model for medical visual question answering.
    Li Y; Long S; Yang Z; Weng H; Zeng K; Huang Z; Lee Wang F; Hao T
    J Biomed Inform; 2022 Oct; 134():104183. PubMed ID: 36038063

  • 7. COIN: Counterfactual Image Generation for Visual Question Answering Interpretation.
    Boukhers Z; Hartmann T; Jürjens J
    Sensors (Basel); 2022 Mar; 22(6):. PubMed ID: 35336415

  • 8. Pre-training Model Based on Parallel Cross-Modality Fusion Layer.
    Li X; Han D; Chang CC
    PLoS One; 2022; 17(2):e0260784. PubMed ID: 35113862

  • 9. BPI-MVQA: a bi-branch model for medical visual question answering.
    Liu S; Zhang X; Zhou X; Yang J
    BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285

  • 10. 3D Question Answering.
    Ye S; Chen D; Han S; Liao J
    IEEE Trans Vis Comput Graph; 2024 Mar; 30(3):1772-1786. PubMed ID: 36446015

  • 11. Medical visual question answering based on question-type reasoning and semantic space constraint.
    Wang M; He X; Liu L; Qing L; Chen H; Liu Y; Ren C
    Artif Intell Med; 2022 Sep; 131():102346. PubMed ID: 36100340

  • 12. Rich Visual Knowledge-Based Augmentation Network for Visual Question Answering.
    Zhang L; Liu S; Liu D; Zeng P; Li X; Song J; Gao L
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4362-4373. PubMed ID: 32941156

  • 13. Dual encoder network with transformer-CNN for multi-organ segmentation.
    Hong Z; Chen M; Hu W; Yan S; Qu A; Chen L; Chen J
    Med Biol Eng Comput; 2023 Mar; 61(3):661-671. PubMed ID: 36580181

  • 14. Medical visual question answering: A survey.
    Lin Z; Zhang D; Tao Q; Shi D; Haffari G; Wu Q; He M; Ge Z
    Artif Intell Med; 2023 Sep; 143():102611. PubMed ID: 37673579

  • 15. FVQA: Fact-based Visual Question Answering.
    Wang P; Wu Q; Shen C; Dick A; Hengel AVD
    IEEE Trans Pattern Anal Mach Intell; 2018 Oct; 40(10):2413-2427. PubMed ID: 28945588

  • 16. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 17. Deep Modular Bilinear Attention Network for Visual Question Answering.
    Yan F; Silamu W; Li Y
    Sensors (Basel); 2022 Jan; 22(3):. PubMed ID: 35161790

  • 18. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 19. The multi-modal fusion in visual question answering: a review of attention mechanisms.
    Lu S; Liu M; Yin L; Yin Z; Liu X; Zheng W
    PeerJ Comput Sci; 2023; 9():e1400. PubMed ID: 37346665

  • 20. Loss Re-Scaling VQA: Revisiting the Language Prior Problem From a Class-Imbalance View.
    Guo Y; Nie L; Cheng Z; Tian Q; Zhang M
    IEEE Trans Image Process; 2022; 31():227-238. PubMed ID: 34847029
