171 related articles for article (PubMed ID: 33739924)
1. ResKD: Residual-Guided Knowledge Distillation. Li X; Li S; Omar B; Wu F; Li X. IEEE Trans Image Process; 2021; 30:4735-4746. PubMed ID: 33739924
2. Highlight Every Step: Knowledge Distillation via Collaborative Teaching. Zhao H; Sun X; Dong J; Chen C; Dong Z. IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909
3. DCCD: Reducing Neural Network Redundancy via Distillation. Liu Y; Chen J; Liu Y. IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254
4. Restructuring the Teacher and Student in Self-Distillation. Zheng Y; Wang C; Tao C; Lin S; Qian J; Wu J. IEEE Trans Image Process; 2024; 33:5551-5563. PubMed ID: 39316482
5. Mitigating carbon footprint for knowledge distillation based deep learning model compression. Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N. PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614
6. Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector. Shang R; Li W; Zhu S; Jiao L; Li Y. Neural Netw; 2023 Jul; 164:345-356. PubMed ID: 37163850
7. STKD: Distilling Knowledge From Synchronous Teaching for Efficient Model Compression. Su T; Zhang J; Yu Z; Wang G; Liu X. IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10051-10064. PubMed ID: 35420989
8. Knowledge Distillation for Semantic Segmentation Using Channel and Spatial Correlations and Adaptive Cross Entropy. Park S; Heo YS. Sensors (Basel); 2020 Aug; 20(16). PubMed ID: 32824456
9. Adversarial learning-based multi-level dense-transmission knowledge distillation for AP-ROP detection. Xie H; Liu Y; Lei H; Song T; Yue G; Du Y; Wang T; Zhang G; Lei B. Med Image Anal; 2023 Feb; 84:102725. PubMed ID: 36527770
10. Lightweight Depth Completion Network with Local Similarity-Preserving Knowledge Distillation. Jeong Y; Park J; Cho D; Hwang Y; Choi SB; Kweon IS. Sensors (Basel); 2022 Sep; 22(19). PubMed ID: 36236485
11. Memory-Replay Knowledge Distillation. Wang J; Zhang P; Li Y. Sensors (Basel); 2021 Apr; 21(8). PubMed ID: 33921068
12. Self-Distillation: Towards Efficient and Compact Neural Networks. Zhang L; Bao C; Ma K. IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4388-4403. PubMed ID: 33735074
13. Distilling a Powerful Student Model via Online Knowledge Distillation. Li S; Lin M; Wang Y; Wu Y; Tian Y; Shao L; Ji R. IEEE Trans Neural Netw Learn Syst; 2023 Nov; 34(11):8743-8752. PubMed ID: 35254994
14. A 3DCNN-Based Knowledge Distillation Framework for Human Activity Recognition. Ullah H; Munir A. J Imaging; 2023 Apr; 9(4). PubMed ID: 37103233
15. Learning From Human Educational Wisdom: A Student-Centered Knowledge Distillation Method. Yang S; Yang J; Zhou M; Huang Z; Zheng WS; Yang X; Ren J. IEEE Trans Pattern Anal Mach Intell; 2024 Jun; 46(6):4188-4205. PubMed ID: 38227419
16. Spot-Adaptive Knowledge Distillation. Song J; Chen Y; Ye J; Song M. IEEE Trans Image Process; 2022; 31:3359-3370. PubMed ID: 35503832