128 related articles for the article with PubMed ID 37030723
1. Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition. Yang C; An Z; Zhou H; Zhuang F; Xu Y; Zhang Q. IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):10212-10227. PubMed ID: 37030723.
2. Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution. Yang C; An Z; Cai L; Xu Y. IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2094-2108. PubMed ID: 35820013.
3. DCCD: Reducing Neural Network Redundancy via Distillation. Liu Y; Chen J; Liu Y. IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254.
4. Leveraging different learning styles for improved knowledge distillation in biomedical imaging. Niyaz U; Sambyal AS; Bathula DR. Comput Biol Med; 2024 Jan; 168:107764. PubMed ID: 38056210.
5. Spot-Adaptive Knowledge Distillation. Song J; Chen Y; Ye J; Song M. IEEE Trans Image Process; 2022; 31:3359-3370. PubMed ID: 35503832.
6. Knowledge Distillation Meets Label Noise Learning: Ambiguity-Guided Mutual Label Refinery. Jiang R; Yan Y; Xue JH; Chen S; Wang N; Wang H. IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP. PubMed ID: 38019631.
7. Knowledge Transfer via Decomposing Essential Information in Convolutional Neural Networks. Lee S; Song BC. IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):366-377. PubMed ID: 33048771.
8. Multi-task prediction-based graph contrastive learning for inferring the relationship among lncRNAs, miRNAs and diseases. Sheng N; Wang Y; Huang L; Gao L; Cao Y; Xie X; Fu Y. Brief Bioinform; 2023 Sep; 24(5). PubMed ID: 37529914.
9. STKD: Distilling Knowledge From Synchronous Teaching for Efficient Model Compression. Su T; Zhang J; Yu Z; Wang G; Liu X. IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10051-10064. PubMed ID: 35420989.
10. A General Dynamic Knowledge Distillation Method for Visual Analytics. Tu Z; Liu X; Xiao X. IEEE Trans Image Process; 2022 Oct; PP. PubMed ID: 36227819.
11. Reducing annotation burden in MR: A novel MR-contrast guided contrastive learning approach for image segmentation. Umapathy L; Brown T; Mushtaq R; Greenhill M; Lu J; Martin D; Altbach M; Bilgin A. Med Phys; 2024 Apr; 51(4):2707-2720. PubMed ID: 37956263.
12. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation. Chaitanya K; Erdil E; Karani N; Konukoglu E. Med Image Anal; 2023 Jul; 87:102792. PubMed ID: 37054649.
13. Multistage feature fusion knowledge distillation. Li G; Wang K; Lv P; He P; Zhou Z; Xu C. Sci Rep; 2024 Jun; 14(1):13373. PubMed ID: 38862547.
14. A Practical Contrastive Learning Framework for Single-Image Super-Resolution. Wu G; Jiang J; Liu X. IEEE Trans Neural Netw Learn Syst; 2024 Nov; 35(11):15834-15845. PubMed ID: 37428660.
15. Teacher-student complementary sample contrastive distillation. Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y. Neural Netw; 2024 Feb; 170:176-189. PubMed ID: 37989039.
16. Rethinking Intermediate Layers Design in Knowledge Distillation for Kidney and Liver Tumor Segmentation. Gorade V; Mittal S; Jha D; Bagci U. ArXiv; 2024 May. PubMed ID: 38855539.