These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

155 related articles for article (PubMed ID: 34852298)

  • 1. Event-centric multi-modal fusion method for dense video captioning.
    Chang Z; Zhao D; Chen H; Li J; Liu P
    Neural Netw; 2022 Feb; 146():120-129. PubMed ID: 34852298

  • 2. Fusion of Multi-Modal Features to Enhance Dense Video Caption.
    Huang X; Chan KH; Wu W; Sheng H; Ke W
    Sensors (Basel); 2023 Jun; 23(12):. PubMed ID: 37420732

  • 3. Lightweight dense video captioning with cross-modal attention and knowledge-enhanced unbiased scene graph.
    Han S; Liu J; Zhang J; Gong P; Zhang X; He H
    Complex Intell Systems; 2023 Feb; ():1-18. PubMed ID: 36855683

  • 4. Class-dependent and cross-modal memory network considering sentimental features for video-based captioning.
    Xiong H; Zhou Y; Liu J; Cai Y
    Front Psychol; 2023; 14():1124369. PubMed ID: 36874867

  • 5. Gaze-assisted automatic captioning of fetal ultrasound videos using three-way multi-modal deep neural networks.
    Alsharid M; Cai Y; Sharma H; Drukker L; Papageorghiou AT; Noble JA
    Med Image Anal; 2022 Nov; 82():102630. PubMed ID: 36223683

  • 6. Cross-Modal Graph With Meta Concepts for Video Captioning.
    Wang H; Lin G; Hoi SCH; Miao C
    IEEE Trans Image Process; 2022; 31():5150-5162. PubMed ID: 35901005

  • 7. Reconstruct and Represent Video Contents for Captioning via Reinforcement Learning.
    Zhang W; Wang B; Ma L; Liu W
    IEEE Trans Pattern Anal Mach Intell; 2020 Dec; 42(12):3088-3101. PubMed ID: 31180887

  • 8. Corrigendum to "Event-centric Multi-modal Fusion Method for Dense Video Captioning" [Neural Networks 146 (2022) 120-129].
    Chang Z; Zhao D; Chen H; Li J; Liu P
    Neural Netw; 2022 Aug; 152():527. PubMed ID: 35660548

  • 9. Syntax Customized Video Captioning by Imitating Exemplar Sentences.
    Yuan Y; Ma L; Zhu W
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):10209-10221. PubMed ID: 34847021

  • 10. Research on Video Captioning Based on Multifeature Fusion.
    Zhao H; Guo L; Chen Z; Zheng H
    Comput Intell Neurosci; 2022; 2022():1204909. PubMed ID: 35528356

  • 11. Video Captioning with Object-Aware Spatio-Temporal Correlation and Aggregation.
    Zhang J; Peng Y
    IEEE Trans Image Process; 2020 Apr; ():. PubMed ID: 32356746

  • 12. From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning.
    Song J; Guo Y; Gao L; Li X; Hanjalic A; Shen HT
    IEEE Trans Neural Netw Learn Syst; 2019 Oct; 30(10):3047-3058. PubMed ID: 30130235

  • 13. Hierarchical LSTMs with Adaptive Attention for Visual Captioning.
    Gao L; Li X; Song J; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2020 May; 42(5):1112-1131. PubMed ID: 30668467

  • 14. Semantic and Relation Modulation for Audio-Visual Event Localization.
    Wang H; Zha ZJ; Li L; Chen X; Luo J
    IEEE Trans Pattern Anal Mach Intell; 2023 Jun; 45(6):7711-7725. PubMed ID: 37015417

  • 15. Describing Video With Attention-Based Bidirectional LSTM.
    Bin Y; Yang Y; Shen F; Xie N; Shen HT; Li X
    IEEE Trans Cybern; 2019 Jul; 49(7):2631-2641. PubMed ID: 29993730

  • 16. What Does a Language-And-Vision Transformer See: The Impact of Semantic Information on Visual Representations.
    Ilinykh N; Dobnik S
    Front Artif Intell; 2021; 4():767971. PubMed ID: 34927063

  • 17. Visual Commonsense-Aware Representation Network for Video Captioning.
    Zeng P; Zhang H; Gao L; Li X; Qian J; Shen HT
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38127607

  • 18. Concept-Aware Video Captioning: Describing Videos With Effective Prior Information.
    Yang B; Cao M; Zou Y
    IEEE Trans Image Process; 2023; 32():5366-5378. PubMed ID: 37639408

  • 19. Video captioning based on vision transformer and reinforcement learning.
    Zhao H; Chen Z; Guo L; Han Z
    PeerJ Comput Sci; 2022; 8():e916. PubMed ID: 35494808

  • 20. A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling.
    Chen H; Lin K; Maye A; Li J; Hu X
    Front Robot AI; 2020; 7():475767. PubMed ID: 33501293
