123 related articles for PubMed ID 35877790 (first 20 shown)
1. Improving Knowledge Distillation With a Customized Teacher.
Tan C; Liu J
IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2290-2299. PubMed ID: 35877790
2. FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation.
Yuan W; Lu X; Zhang R; Liu Y
Entropy (Basel); 2023 Jan; 25(1):. PubMed ID: 36673266
3. Teacher-student complementary sample contrastive distillation.
Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y
Neural Netw; 2024 Feb; 170():176-189. PubMed ID: 37989039
4. Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector.
Shang R; Li W; Zhu S; Jiao L; Li Y
Neural Netw; 2023 Jul; 164():345-356. PubMed ID: 37163850
5. RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis.
Kim H; Kwak TY; Chang H; Kim SW; Kim I
Bioengineering (Basel); 2023 Nov; 10(11):. PubMed ID: 38002403
6. Generalized Knowledge Distillation via Relationship Matching.
Ye HJ; Lu S; Zhan DC
IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1817-1834. PubMed ID: 35298374
7. A General Dynamic Knowledge Distillation Method for Visual Analytics.
Tu Z; Liu X; Xiao X
IEEE Trans Image Process; 2022 Oct; PP():. PubMed ID: 36227819
8. Memory-Replay Knowledge Distillation.
Wang J; Zhang P; Li Y
Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33921068
9. MSKD: Structured knowledge distillation for efficient medical image segmentation.
Zhao L; Qian X; Guo Y; Song J; Hou J; Gong J
Comput Biol Med; 2023 Sep; 164():107284. PubMed ID: 37572439
10. The student's drawing of teacher's pictorial value as a predictor of the student-teacher relationship and school adjustment.
Di Norcia A; Bombi AS; Pinto G; Cannoni E
Front Psychol; 2022; 13():1006568. PubMed ID: 36389493
11. Knowledge Transfer via Decomposing Essential Information in Convolutional Neural Networks.
Lee S; Song BC
IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):366-377. PubMed ID: 33048771
12. Research on a lightweight electronic component detection method based on knowledge distillation.
Xia Z; Gu J; Wang W; Huang Z
Math Biosci Eng; 2023 Nov; 20(12):20971-20994. PubMed ID: 38124584
13. Multi-view Teacher-Student Network.
Tian Y; Sun S; Tang J
Neural Netw; 2022 Feb; 146():69-84. PubMed ID: 34839092
14. Learning With Privileged Multimodal Knowledge for Unimodal Segmentation.
Chen C; Dou Q; Jin Y; Liu Q; Heng PA
IEEE Trans Med Imaging; 2022 Mar; 41(3):621-632. PubMed ID: 34633927
15. On Representation Knowledge Distillation for Graph Neural Networks.
Joshi CK; Liu F; Xun X; Lin J; Foo CS
IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):4656-4667. PubMed ID: 36459610
16. Distilling Knowledge by Mimicking Features.
Wang GH; Ge Y; Wu J
IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8183-8195. PubMed ID: 34379588
17. Pea-KD: Parameter-efficient and accurate Knowledge Distillation on BERT.
Cho I; Kang U
PLoS One; 2022; 17(2):e0263592. PubMed ID: 35180258
18. DCCD: Reducing Neural Network Redundancy via Distillation.
Liu Y; Chen J; Liu Y
IEEE Trans Neural Netw Learn Syst; 2023 Jan; PP():. PubMed ID: 37022254
19. Highlight Every Step: Knowledge Distillation via Collaborative Teaching.
Zhao H; Sun X; Dong J; Chen C; Dong Z
IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909
20. Layerwised multimodal knowledge distillation for vision-language pretrained model.
Wang J; Liao D; Zhang Y; Xu D; Zhang X
Neural Netw; 2024 Jul; 175():106272. PubMed ID: 38569460