BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

143 related articles for article (PubMed ID: 33817003). The first 20 are listed below; a sketch for retrieving the full set programmatically follows the list.

  • 1. Joint embedding VQA model based on dynamic word vector.
    Ma Z; Zheng W; Chen X; Yin L
    PeerJ Comput Sci; 2021; 7():e353. PubMed ID: 33817003

  • 2. Parallel multi-head attention and term-weighted question embedding for medical visual question answering.
    Manmadhan S; Kovoor BC
    Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667

  • 3. A Dual-Attention Learning Network With Word and Sentence Embedding for Medical Visual Question Answering.
    Huang X; Gong H
    IEEE Trans Med Imaging; 2024 Feb; 43(2):832-845. PubMed ID: 37812550

  • 4. BPI-MVQA: a bi-branch model for medical visual question answering.
    Liu S; Zhang X; Zhou X; Yang J
    BMC Med Imaging; 2022 Apr; 22(1):79. PubMed ID: 35488285

  • 5. A Topic Recognition Method of News Text Based on Word Embedding Enhancement.
    Du Q; Li N; Liu W; Sun D; Yang S; Yue F
    Comput Intell Neurosci; 2022; 2022():4582480. PubMed ID: 35222628

  • 6. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 7. Multi-modal adaptive gated mechanism for visual question answering.
    Xu Y; Zhang L; Shen X
    PLoS One; 2023; 18(6):e0287557. PubMed ID: 37379280

  • 8. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 9. Deep Modular Bilinear Attention Network for Visual Question Answering.
    Yan F; Silamu W; Li Y
    Sensors (Basel); 2022 Jan; 22(3):. PubMed ID: 35161790

  • 10. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 11. Text Matching in Insurance Question-Answering Community Based on an Integrated BiLSTM-TextCNN Model Fusing Multi-Feature.
    Li Z; Yang X; Zhou L; Jia H; Li W
    Entropy (Basel); 2023 Apr; 25(4):. PubMed ID: 37190427

  • 12. A combination of TEXTCNN model and Bayesian classifier for microblog sentiment analysis.
    Wang Z; Yao L; Shao X; Wang H
    J Comb Optim; 2023; 45(4):109. PubMed ID: 37200571

  • 13. Text Sentiment Classification Based on BERT Embedding and Sliced Multi-Head Self-Attention Bi-GRU.
    Zhang X; Wu Z; Liu K; Zhao Z; Wang J; Wu C
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772522

  • 14. Adversarial Learning With Multi-Modal Attention for Visual Question Answering.
    Liu Y; Zhang X; Huang F; Cheng L; Li Z
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):3894-3908. PubMed ID: 32833656

  • 15. Adversarial Learning with Bidirectional Attention for Visual Question Answering.
    Li Q; Tang X; Jian Y
    Sensors (Basel); 2021 Oct; 21(21):. PubMed ID: 34770471

  • 16. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
    Cao Q; Li B; Liang X; Wang K; Lin L
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313

  • 17. Bridging the Cross-Modality Semantic Gap in Visual Question Answering.
    Wang B; Ma Y; Li X; Gao J; Hu Y; Yin B
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38446647

  • 18. A Method of Short Text Representation Based on the Feature Probability Embedded Vector.
    Zhou W; Wang H; Sun H; Sun T
    Sensors (Basel); 2019 Aug; 19(17):. PubMed ID: 31466389

  • 19. Asymmetric cross-modal attention network with multimodal augmented mixup for medical visual question answering.
    Li Y; Yang Q; Wang FL; Lee LK; Qu Y; Hao T
    Artif Intell Med; 2023 Oct; 144():102667. PubMed ID: 37783542

  • 20. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering.
    Cao J; Qin X; Zhao S; Shen J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171

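The listing above reproduces only the first 20 of the 143 related citations. As an illustration (not part of the original page or the PubMed4Hh tooling), a related-article set like this one can be fetched programmatically from PubMed with the NCBI E-utilities ELink endpoint; the sketch below uses the standard elink.fcgi URL with the pubmed_pubmed link name, and the helper name related_pmids is our own.

```python
# Minimal sketch: retrieve the PMIDs PubMed lists as "related articles"
# for a given article via NCBI E-utilities ELink (linkname=pubmed_pubmed).
import json
import urllib.parse
import urllib.request

EUTILS_ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"


def related_pmids(pmid):
    """Return the list of PMIDs that PubMed reports as related to `pmid`."""
    params = urllib.parse.urlencode({
        "dbfrom": "pubmed",            # source database
        "db": "pubmed",                # target database
        "id": pmid,                    # article of interest
        "linkname": "pubmed_pubmed",   # the "related articles" link set
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS_ELINK}?{params}") as resp:
        data = json.load(resp)

    pmids = []
    for linkset in data.get("linksets", []):
        for linksetdb in linkset.get("linksetdbs", []):
            if linksetdb.get("linkname") != "pubmed_pubmed":
                continue
            for link in linksetdb.get("links", []):
                # Accept either plain ID strings or {"id": ...} objects,
                # since both shapes have appeared in E-utilities JSON output.
                pmids.append(link["id"] if isinstance(link, dict) else str(link))
    return pmids


if __name__ == "__main__":
    related = related_pmids("33817003")
    print(f"{len(related)} related articles; first five: {related[:5]}")
```

The returned PMIDs are ranked by PubMed's relatedness score, so the first entries correspond to the top of the listing above (e.g. 37362667, 37812550, 35488285).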