These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

379 related articles for article (PubMed ID: 32833656)

  • 1. Adversarial Learning With Multi-Modal Attention for Visual Question Answering.
    Liu Y; Zhang X; Huang F; Cheng L; Li Z
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):3894-3908. PubMed ID: 32833656

  • 2. ALSA: Adversarial Learning of Supervised Attentions for Visual Question Answering.
    Liu Y; Zhang X; Zhao Z; Zhang B; Cheng L; Li Z
    IEEE Trans Cybern; 2022 Jun; 52(6):4520-4533. PubMed ID: 33175690

  • 3. Parallel multi-head attention and term-weighted question embedding for medical visual question answering.
    Manmadhan S; Kovoor BC
    Multimed Tools Appl; 2023 Mar; ():1-22. PubMed ID: 37362667

  • 4. Open-Ended Video Question Answering via Multi-Modal Conditional Adversarial Networks.
    Zhao Z; Xiao S; Song Z; Lu C; Xiao J; Zhuang Y
    IEEE Trans Image Process; 2020 Jan; ():. PubMed ID: 32011250

  • 5. Multi-modal adaptive gated mechanism for visual question answering.
    Xu Y; Zhang L; Shen X
    PLoS One; 2023; 18(6):e0287557. PubMed ID: 37379280

  • 6. Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering.
    Guo Z; Han D
    Sensors (Basel); 2020 Nov; 20(23):. PubMed ID: 33255994

  • 7. Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding.
    Cao Q; Li B; Liang X; Wang K; Lin L
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2758-2767. PubMed ID: 33385313

  • 8. Multi-Turn Video Question Answering via Hierarchical Attention Context Reinforced Networks.
    Zhao Z; Zhang Z; Jiang X; Cai D
    IEEE Trans Image Process; 2019 Aug; 28(8):3860-3872. PubMed ID: 30835223

  • 9. Bridging the Cross-Modality Semantic Gap in Visual Question Answering.
    Wang B; Ma Y; Li X; Gao J; Hu Y; Yin B
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38446647

  • 10. Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool.
    Liu F; Xiang T; Hospedales TM; Yang W; Sun C
    IEEE Trans Pattern Anal Mach Intell; 2020 Feb; 42(2):460-474. PubMed ID: 30418897

  • 11. Interpretable medical image Visual Question Answering via multi-modal relationship graph learning.
    Hu X; Gu L; Kobayashi K; Liu L; Zhang M; Harada T; Summers RM; Zhu Y
    Med Image Anal; 2024 Oct; 97():103279. PubMed ID: 39079429

  • 12. Multitask Learning for Visual Question Answering.
    Ma J; Liu J; Lin Q; Wu B; Wang Y; You Y
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; 34(3):1380-1394. PubMed ID: 34460390

  • 13. Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering.
    Cao J; Qin X; Zhao S; Shen J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; PP():. PubMed ID: 35130171

  • 14. Integrating Multi-Label Contrastive Learning With Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval.
    Qian S; Xue D; Fang Q; Xu C
    IEEE Trans Pattern Anal Mach Intell; 2023 Apr; 45(4):4794-4811. PubMed ID: 35788462

  • 15. An effective spatial relational reasoning networks for visual question answering.
    Shen X; Han D; Chen C; Luo G; Wu Z
    PLoS One; 2022; 17(11):e0277693. PubMed ID: 36441742

  • 16. Hypergraph-Based Multi-Modal Representation for Open-Set 3D Object Retrieval.
    Feng Y; Ji S; Liu YS; Du S; Dai Q; Gao Y
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; 46(4):2206-2223. PubMed ID: 37966934

  • 17. Adversarial Learning with Bidirectional Attention for Visual Question Answering.
    Li Q; Tang X; Jian Y
    Sensors (Basel); 2021 Oct; 21(21):. PubMed ID: 34770471

  • 18. MRA-Net: Improving VQA Via Multi-Modal Relation Attention Network.
    Peng L; Yang Y; Wang Z; Huang Z; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jan; 44(1):318-329. PubMed ID: 32750794

  • 19. MAGE: Multi-scale Context-aware Interaction based on Multi-granularity Embedding for Chinese Medical Question Answer Matching.
    Wang M; He X; Liu Y; Qing L; Zhang Z; Chen H
    Comput Methods Programs Biomed; 2023 Jan; 228():107249. PubMed ID: 36423486

  • 20. Joint Feature Synthesis and Embedding: Adversarial Cross-Modal Retrieval Revisited.
    Xu X; Lin K; Yang Y; Hanjalic A; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2022 Jun; 44(6):3030-3047. PubMed ID: 33332264
