
144 related articles for PubMed ID 35420989 (showing entries 1-20):

  • 1. STKD: Distilling Knowledge From Synchronous Teaching for Efficient Model Compression.
    Su T; Zhang J; Yu Z; Wang G; Liu X
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10051-10064. PubMed ID: 35420989

  • 2. Highlight Every Step: Knowledge Distillation via Collaborative Teaching.
    Zhao H; Sun X; Dong J; Chen C; Dong Z
    IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909

  • 3. DCCD: Reducing Neural Network Redundancy via Distillation.
    Liu Y; Chen J; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; PP():. PubMed ID: 37022254

  • 4. Knowledge Transfer via Decomposing Essential Information in Convolutional Neural Networks.
    Lee S; Song BC
    IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):366-377. PubMed ID: 33048771

  • 5. Pea-KD: Parameter-efficient and accurate Knowledge Distillation on BERT.
    Cho I; Kang U
    PLoS One; 2022; 17(2):e0263592. PubMed ID: 35180258

  • 6. Comprehensive learning and adaptive teaching: Distilling multi-modal knowledge for pathological glioma grading.
    Xing X; Zhu M; Chen Z; Yuan Y
    Med Image Anal; 2024 Jan; 91():102990. PubMed ID: 37864912

  • 7. Distilling Knowledge by Mimicking Features.
    Wang GH; Ge Y; Wu J
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8183-8195. PubMed ID: 34379588

  • 8. Distilling a Powerful Student Model via Online Knowledge Distillation.
    Li S; Lin M; Wang Y; Wu Y; Tian Y; Shao L; Ji R
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; 34(11):8743-8752. PubMed ID: 35254994

  • 9. Knowledge Distillation Meets Label Noise Learning: Ambiguity-Guided Mutual Label Refinery.
    Jiang R; Yan Y; Xue JH; Chen S; Wang N; Wang H
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 38019631

  • 10. Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution.
    Yang C; An Z; Cai L; Xu Y
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2094-2108. PubMed ID: 35820013

  • 11. Memory-Replay Knowledge Distillation.
    Wang J; Zhang P; Li Y
    Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33921068

  • 12. Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
    Niyaz U; Sambyal AS; Bathula DR
    Comput Biol Med; 2024 Jan; 168():107764. PubMed ID: 38056210

  • 13. Teacher-student complementary sample contrastive distillation.
    Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y
    Neural Netw; 2024 Feb; 170():176-189. PubMed ID: 37989039

  • 14. Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector.
    Shang R; Li W; Zhu S; Jiao L; Li Y
    Neural Netw; 2023 Jul; 164():345-356. PubMed ID: 37163850

  • 15. Collaborative Knowledge Distillation via Multiknowledge Transfer.
    Gou J; Sun L; Yu B; Du L; Ramamohanarao K; Tao D
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):6718-6730. PubMed ID: 36264723

  • 16. Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition.
    Yang C; An Z; Zhou H; Zhuang F; Xu Y; Zhang Q
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):10212-10227. PubMed ID: 37030723

  • 17. A Novel Approach to Classifying Breast Cancer Histopathology Biopsy Images Using Bilateral Knowledge Distillation and Label Smoothing Regularization.
    Chaudhury S; Shelke N; Sau K; Prasanalakshmi B; Shabaz M
    Comput Math Methods Med; 2021; 2021():4019358. PubMed ID: 34721657

  • 18. ResKD: Residual-Guided Knowledge Distillation.
    Li X; Li S; Omar B; Wu F; Li X
    IEEE Trans Image Process; 2021; 30():4735-4746. PubMed ID: 33739924

  • 19. EPANet-KD: Efficient progressive attention network for fine-grained provincial village classification via knowledge distillation.
    Zhang C; Liu C; Gong H; Teng J
    PLoS One; 2024; 19(2):e0298452. PubMed ID: 38359020

  • 20. A 3DCNN-Based Knowledge Distillation Framework for Human Activity Recognition.
    Ullah H; Munir A
    J Imaging; 2023 Apr; 9(4):. PubMed ID: 37103233
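Most of the entries above build on the classic teacher-student distillation objective (Hinton-style knowledge distillation): the student network is trained to match the teacher's temperature-softened class distribution. A minimal sketch of that shared objective, with purely illustrative logits and temperature not taken from any listed paper:

```python
import math

def softmax(logits, temperature=1.0):
    """Softened softmax: a higher temperature flattens the distribution,
    exposing the teacher's 'dark knowledge' about non-target classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) between temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl

teacher = [6.0, 2.0, 1.0]   # hypothetical teacher logits for 3 classes
student = [4.0, 2.5, 1.5]   # hypothetical student logits
loss = kd_loss(teacher, student)
```

In practice this distillation term is combined with the usual cross-entropy on ground-truth labels; the multi-teacher, online, and collaborative variants listed above differ mainly in how the teacher distribution `p` is produced.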
