168 related articles for article (PubMed ID: 37549611)
1. CReg-KD: Model refinement via confidence regularized knowledge distillation for brain imaging.
Yang Y; Guo X; Ye C; Xiang Y; Ma T
Med Image Anal; 2023 Oct; 89:102916. PubMed ID: 37549611
2. MSKD: Structured knowledge distillation for efficient medical image segmentation.
Zhao L; Qian X; Guo Y; Song J; Hou J; Gong J
Comput Biol Med; 2023 Sep; 164:107284. PubMed ID: 37572439
3. Memory-Replay Knowledge Distillation.
Wang J; Zhang P; Li Y
Sensors (Basel); 2021 Apr; 21(8). PubMed ID: 33921068
4. Classification of Alzheimer's disease in MRI images using knowledge distillation framework: an investigation.
Li Y; Luo J; Zhang J
Int J Comput Assist Radiol Surg; 2022 Jul; 17(7):1235-1243. PubMed ID: 35633492
5. Teacher-student complementary sample contrastive distillation.
Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y
Neural Netw; 2024 Feb; 170:176-189. PubMed ID: 37989039
6. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.
Zhang Y; Chen Z; Yang X
Comput Biol Med; 2024 Mar; 170:108088. PubMed ID: 38320339
7. MT4MTL-KD: A Multi-Teacher Knowledge Distillation Framework for Triplet Recognition.
Gui S; Wang Z; Chen J; Zhou X; Zhang C; Cao Y
IEEE Trans Med Imaging; 2024 Apr; 43(4):1628-1639. PubMed ID: 38127608
8. Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
Niyaz U; Sambyal AS; Bathula DR
Comput Biol Med; 2024 Jan; 168:107764. PubMed ID: 38056210
9. Learning With Privileged Multimodal Knowledge for Unimodal Segmentation.
Chen C; Dou Q; Jin Y; Liu Q; Heng PA
IEEE Trans Med Imaging; 2022 Mar; 41(3):621-632. PubMed ID: 34633927
10. EPANet-KD: Efficient progressive attention network for fine-grained provincial village classification via knowledge distillation.
Zhang C; Liu C; Gong H; Teng J
PLoS One; 2024; 19(2):e0298452. PubMed ID: 38359020
11. Generalized fused group lasso regularized multi-task feature learning for predicting cognitive outcomes in Alzheimer's disease.
Cao P; Liu X; Liu H; Yang J; Zhao D; Huang M; Zaiane O
Comput Methods Programs Biomed; 2018 Aug; 162:19-45. PubMed ID: 29903486
12. Self-knowledge distillation for surgical phase recognition.
Zhang J; Barbarisi S; Kadkhodamohammadi A; Stoyanov D; Luengo I
Int J Comput Assist Radiol Surg; 2024 Jan; 19(1):61-68. PubMed ID: 37340283
13. A single stage knowledge distillation network for brain tumor segmentation on limited MR image modalities.
Choi Y; Al-Masni MA; Jung KJ; Yoo RE; Lee SY; Kim DH
Comput Methods Programs Biomed; 2023 Oct; 240:107644. PubMed ID: 37307766
14. FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation.
Yuan W; Lu X; Zhang R; Liu Y
Entropy (Basel); 2023 Jan; 25(1). PubMed ID: 36673266
15. Collaborative Knowledge Distillation via Multiknowledge Transfer.
Gou J; Sun L; Yu B; Du L; Ramamohanarao K; Tao D
IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):6718-6730. PubMed ID: 36264723
16. An ultra-fast deep-learning-based dose engine for prostate VMAT via knowledge distillation framework with limited patient data.
Tseng W; Liu H; Yang Y; Liu C; Lu B
Phys Med Biol; 2022 Dec; 68(1). PubMed ID: 36533689
17. RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging.
Jaiswal A; Ashutosh K; Rousseau JF; Peng Y; Wang Z; Ding Y
Proc IEEE Int Conf Data Min; 2022; 2022:981-986. PubMed ID: 37038389
18. Layer-Specific Knowledge Distillation for Class Incremental Semantic Segmentation.
Wang Q; Wu Y; Yang L; Zuo W; Hu Q
IEEE Trans Image Process; 2024; 33:1977-1989. PubMed ID: 38451756
19. DSP-KD: Dual-Stage Progressive Knowledge Distillation for Skin Disease Classification.
Zeng X; Ji Z; Zhang H; Chen R; Liao Q; Wang J; Lyu T; Zhao L
Bioengineering (Basel); 2024 Jan; 11(1). PubMed ID: 38247947
20. Multi-view Teacher-Student Network.
Tian Y; Sun S; Tang J
Neural Netw; 2022 Feb; 146:69-84. PubMed ID: 34839092