These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine *

111 related articles for article (PubMed ID: 39172610)

  • 1. Multimodal Cross-Lingual Summarization for Videos: A Revisit in Knowledge Distillation Induced Triple-Stage Training Method.
    Liu N; Wei K; Yang Y; Tao J; Sun X; Yao F; Yu H; Jin L; Lv Z; Fan C
    IEEE Trans Pattern Anal Mach Intell; 2024 Dec; 46(12):10697-10714. PubMed ID: 39172610

  • 2. Multimodal Abstractive Summarization using bidirectional encoder representations from transformers with attention mechanism.
    Argade D; Khairnar V; Vora D; Patil S; Kotecha K; Alfarhood S
    Heliyon; 2024 Feb; 10(4):e26162. PubMed ID: 38420442

  • 3. Cross-modal knowledge distillation for continuous sign language recognition.
    Gao L; Shi P; Hu L; Feng J; Zhu L; Wan L; Feng W
    Neural Netw; 2024 Nov; 179():106587. PubMed ID: 39111160

  • 4. A two-stage fine-tuning method for low-resource cross-lingual summarization.
    Zhang K; Zhang Y; Yu Z; Huang Y; Tan K
    Math Biosci Eng; 2024 Jan; 21(1):1125-1143. PubMed ID: 38303457

  • 5. Knowledge Distillation in Video-Based Human Action Recognition: An Intuitive Approach to Efficient and Flexible Model Training.
    Camarena F; Gonzalez-Mendoza M; Chang L
    J Imaging; 2024 Mar; 10(4):. PubMed ID: 38667983

  • 6. Unsupervised cross-lingual model transfer for named entity recognition with contextualized word representations.
    Yan H; Qian T; Xie L; Chen S
    PLoS One; 2021; 16(9):e0257230. PubMed ID: 34547014

  • 7. Layerwised multimodal knowledge distillation for vision-language pretrained model.
    Wang J; Liao D; Zhang Y; Xu D; Zhang X
    Neural Netw; 2024 Jul; 175():106272. PubMed ID: 38569460

  • 8. Dataset construction method of cross-lingual summarization based on filtering and text augmentation.
    Pan H; Xi Y; Wang L; Nan Y; Su Z; Cao R
    PeerJ Comput Sci; 2023; 9():e1299. PubMed ID: 37346668

  • 9. Distilling Privileged Knowledge for Anomalous Event Detection From Weakly Labeled Videos.
    Liu T; Lam KM; Kong J
    IEEE Trans Neural Netw Learn Syst; 2024 Sep; 35(9):12627-12641. PubMed ID: 37037244

  • 10. High-Precision Multi-Object Tracking in Satellite Videos via Pixel-Wise Adaptive Feature Enhancement.
    Wan G; Su Z; Wu Y; Guo N; Cong D; Wei Z; Liu W; Wang G
    Sensors (Basel); 2024 Oct; 24(19):. PubMed ID: 39409528

  • 11. KBStyle: Fast Style Transfer Using a 200 KB Network With Symmetric Knowledge Distillation.
    Chen W; Huang Y; Wang M; Wu X; Zeng X
    IEEE Trans Image Process; 2024; 33():82-94. PubMed ID: 38032789

  • 12. Self-knowledge distillation for surgical phase recognition.
    Zhang J; Barbarisi S; Kadkhodamohammadi A; Stoyanov D; Luengo I
    Int J Comput Assist Radiol Surg; 2024 Jan; 19(1):61-68. PubMed ID: 37340283

  • 13. Cross-Lingual Universal Dependency Parsing Only From One Monolingual Treebank.
    Sun K; Li Z; Zhao H
    IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13393-13407. PubMed ID: 37410651

  • 14. Abstractive text summarization of low-resourced languages using deep learning.
    Shafiq N; Hamid I; Asif M; Nawaz Q; Aljuaid H; Ali H
    PeerJ Comput Sci; 2023; 9():e1176. PubMed ID: 37346684

  • 15. Multimodal MRI segmentation of key structures for microvascular decompression via knowledge-driven mutual distillation and topological constraints.
    Tu R; Zhang D; Li C; Xiao L; Zhang Y; Cai X; Si W
    Int J Comput Assist Radiol Surg; 2024 Jul; 19(7):1329-1338. PubMed ID: 38739324

  • 16. Multimodal Distillation Pre-training Model for Ultrasound Dynamic Images Annotation.
    Chen X; Ke J; Zhang Y; Gou J; Shen A; Wan S
    IEEE J Biomed Health Inform; 2024 Aug; PP():. PubMed ID: 39102331

  • 17. Quantization via Distillation and Contrastive Learning.
    Pei Z; Yao X; Zhao W; Yu B
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37610897

  • 18. Scale-Hybrid Group Distillation with Knowledge Disentangling for Continual Semantic Segmentation.
    Song Z; Zhang X; Shi Z
    Sensors (Basel); 2023 Sep; 23(18):. PubMed ID: 37765877

  • 19. Cross-Lingual Knowledge Transferring by Structural Correspondence and Space Transfer.
    Wang D; Wu J; Yang J; Jing B; Zhang W; He X; Zhang H
    IEEE Trans Cybern; 2022 Jul; 52(7):6555-6566. PubMed ID: 33544685

  • 20. Dual Distillation Discriminator Networks for Domain Adaptive Few-Shot Learning.
    Liu X; Ji Z; Pang Y; Han Z
    Neural Netw; 2023 Aug; 165():625-633. PubMed ID: 37364472
