189 related articles for article (PubMed ID: 30130235)

  • 1. From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning.
    Song J; Guo Y; Gao L; Li X; Hanjalic A; Shen HT
    IEEE Trans Neural Netw Learn Syst; 2019 Oct; 30(10):3047-3058. PubMed ID: 30130235

  • 2. Describing Video With Attention-Based Bidirectional LSTM.
    Bin Y; Yang Y; Shen F; Xie N; Shen HT; Li X
    IEEE Trans Cybern; 2019 Jul; 49(7):2631-2641. PubMed ID: 29993730

  • 3. Hierarchical LSTMs with Adaptive Attention for Visual Captioning.
    Gao L; Li X; Song J; Shen HT
    IEEE Trans Pattern Anal Mach Intell; 2020 May; 42(5):1112-1131. PubMed ID: 30668467

  • 4. SibNet: Sibling Convolutional Encoder for Video Captioning.
    Liu S; Ren Z; Yuan J
    IEEE Trans Pattern Anal Mach Intell; 2021 Sep; 43(9):3259-3272. PubMed ID: 32149622

  • 5. Video Captioning by Adversarial LSTM.
    Yang Y; Zhou J; Ai J; Bin Y; Hanjalic A; Shen HT; Ji Y
    IEEE Trans Image Process; 2018 Jul. PubMed ID: 30010568

  • 6. Reconstruct and Represent Video Contents for Captioning via Reinforcement Learning.
    Zhang W; Wang B; Ma L; Liu W
    IEEE Trans Pattern Anal Mach Intell; 2020 Dec; 42(12):3088-3101. PubMed ID: 31180887

  • 7. Self-Guiding Multimodal LSTM-When We Do Not Have a Perfect Training Dataset for Image Captioning.
    Xian Y; Tian Y
    IEEE Trans Image Process; 2019 Nov; 28(11):5241-5252. PubMed ID: 31135361

  • 8. Video captioning with stacked attention and semantic hard pull.
    Rahman MM; Abedin T; Prottoy KSS; Moshruba A; Siddiqui FH
    PeerJ Comput Sci; 2021; 7():e664. PubMed ID: 34435104

  • 9. Video Captioning Using Global-Local Representation.
    Yan L; Ma S; Wang Q; Chen Y; Zhang X; Savakis A; Liu D
    IEEE Trans Circuits Syst Video Technol; 2022 Oct; 32(10):6642-6656. PubMed ID: 37215187

  • 10. CAM-RNN: Co-Attention Model Based RNN for Video Captioning.
    Zhao B; Li X; Lu X
    IEEE Trans Image Process; 2019 Nov; 28(11):5552-5565. PubMed ID: 31107650

  • 11. Image Captioning and Visual Question Answering Based on Attributes and External Knowledge.
    Wu Q; Shen C; Wang P; Dick A; van den Hengel A
    IEEE Trans Pattern Anal Mach Intell; 2018 Jun; 40(6):1367-1381. PubMed ID: 28574341

  • 12. Aligning Source Visual and Target Language Domains for Unpaired Video Captioning.
    Liu F; Wu X; You C; Ge S; Zou Y; Sun X
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9255-9268. PubMed ID: 34855588

  • 13. Concept-Aware Video Captioning: Describing Videos With Effective Prior Information.
    Yang B; Cao M; Zou Y
    IEEE Trans Image Process; 2023; 32():5366-5378. PubMed ID: 37639408

  • 14. Video captioning based on vision transformer and reinforcement learning.
    Zhao H; Chen Z; Guo L; Han Z
    PeerJ Comput Sci; 2022; 8():e916. PubMed ID: 35494808

  • 15. UAT: Universal Attention Transformer for Video Captioning.
    Im H; Choi YS
    Sensors (Basel); 2022 Jun; 22(13). PubMed ID: 35808316

  • 16. Emotional Video Captioning With Vision-Based Emotion Interpretation Network.
    Song P; Guo D; Yang X; Tang S; Wang M
    IEEE Trans Image Process; 2024; 33():1122-1135. PubMed ID: 38300778

  • 17. A Semantics-Assisted Video Captioning Model Trained With Scheduled Sampling.
    Chen H; Lin K; Maye A; Li J; Hu X
    Front Robot AI; 2020; 7():475767. PubMed ID: 33501293

  • 18. Fusion of Multi-Modal Features to Enhance Dense Video Caption.
    Huang X; Chan KH; Wu W; Sheng H; Ke W
    Sensors (Basel); 2023 Jun; 23(12). PubMed ID: 37420732

  • 19. Syntax Customized Video Captioning by Imitating Exemplar Sentences.
    Yuan Y; Ma L; Zhu W
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):10209-10221. PubMed ID: 34847021

  • 20. Event-centric multi-modal fusion method for dense video captioning.
    Chang Z; Zhao D; Chen H; Li J; Liu P
    Neural Netw; 2022 Feb; 146():120-129. PubMed ID: 34852298
