These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

123 related articles for article (PubMed ID: 38894408)

  • 1. NTCE-KD: Non-Target-Class-Enhanced Knowledge Distillation.
    Li C; Teng X; Ding Y; Lan L
    Sensors (Basel); 2024 Jun; 24(11):. PubMed ID: 38894408

  • 2. Memory-Replay Knowledge Distillation.
    Wang J; Zhang P; Li Y
    Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33921068

  • 3. Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector.
    Shang R; Li W; Zhu S; Jiao L; Li Y
    Neural Netw; 2023 Jul; 164():345-356. PubMed ID: 37163850

  • 4. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
    Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N
    PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614

  • 5. Decoupled graph knowledge distillation: A general logits-based method for learning MLPs on graphs.
    Tian Y; Xu S; Li M
    Neural Netw; 2024 Nov; 179():106567. PubMed ID: 39089155

  • 6. Complementary label learning based on knowledge distillation.
    Ying P; Li Z; Sun R; Xu X
    Math Biosci Eng; 2023 Sep; 20(10):17905-17918. PubMed ID: 38052542

  • 7. Dual Distillation Discriminator Networks for Domain Adaptive Few-Shot Learning.
    Liu X; Ji Z; Pang Y; Han Z
    Neural Netw; 2023 Aug; 165():625-633. PubMed ID: 37364472

  • 8. Highlight Every Step: Knowledge Distillation via Collaborative Teaching.
    Zhao H; Sun X; Dong J; Chen C; Dong Z
    IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909

  • 9. Cosine similarity knowledge distillation for surface anomaly detection.
    Sheng S; Jing J; Wang Z; Zhang H
    Sci Rep; 2024 Apr; 14(1):8150. PubMed ID: 38589492

  • 10. DCCD: Reducing Neural Network Redundancy via Distillation.
    Liu Y; Chen J; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254

  • 11. A non-negative feedback self-distillation method for salient object detection.
    Chen L; Cao T; Zheng Y; Yang J; Wang Y; Wang Y; Zhang B
    PeerJ Comput Sci; 2023; 9():e1435. PubMed ID: 37409081

  • 12. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance.
    Xu TB; Liu CL
    IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):257-269. PubMed ID: 33074828

  • 13. Teacher-student complementary sample contrastive distillation.
    Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y
    Neural Netw; 2024 Feb; 170():176-189. PubMed ID: 37989039

  • 14. A lightweight speech recognition method with target-swap knowledge distillation for Mandarin air traffic control communications.
    Ren J; Yang S; Shi Y; Yang J
    PeerJ Comput Sci; 2023; 9():e1650. PubMed ID: 38077570

  • 15. CReg-KD: Model refinement via confidence regularized knowledge distillation for brain imaging.
    Yang Y; Guo X; Ye C; Xiang Y; Ma T
    Med Image Anal; 2023 Oct; 89():102916. PubMed ID: 37549611

  • 16. Generalized Knowledge Distillation via Relationship Matching.
    Ye HJ; Lu S; Zhan DC
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1817-1834. PubMed ID: 35298374

  • 17. Adversarial Distillation for Learning with Privileged Provisions.
    Wang X; Zhang R; Sun Y; Qi J
    IEEE Trans Pattern Anal Mach Intell; 2021 Mar; 43(3):786-797. PubMed ID: 31545712

  • 18. STKD: Distilling Knowledge From Synchronous Teaching for Efficient Model Compression.
    Su T; Zhang J; Yu Z; Wang G; Liu X
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10051-10064. PubMed ID: 35420989

  • 19. Restructuring the Teacher and Student in Self-Distillation.
    Zheng Y; Wang C; Tao C; Lin S; Qian J; Wu J
    IEEE Trans Image Process; 2024; 33():5551-5563. PubMed ID: 39316482

  • 20. Collaborative Knowledge Distillation via Multiknowledge Transfer.
    Gou J; Sun L; Yu B; Du L; Ramamohanarao K; Tao D
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):6718-6730. PubMed ID: 36264723
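The entries above all build on logit-based knowledge distillation. As orientation for readers new to the topic, here is a minimal sketch of the classic soft-target distillation loss (Hinton-style temperature-scaled KL divergence between teacher and student logits); the function names are illustrative, not taken from any of the cited papers.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: larger T softens the distribution,
    # exposing the "dark knowledge" in non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradients keep the same magnitude as T varies.
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (T * T) * kl
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels; variants in the list above differ mainly in how the teacher signal is formed (multi-teacher, self-distillation, non-target-class reweighting, and so on).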
