

BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

123 articles related to PubMed ID 38894408; items 21-40 are listed below.

  • 21. A General Dynamic Knowledge Distillation Method for Visual Analytics.
    Tu Z; Liu X; Xiao X
    IEEE Trans Image Process; 2022 Oct; [Epub ahead of print]. PubMed ID: 36227819

  • 22. Dual Balanced Class-Incremental Learning With im-Softmax and Angular Rectification.
    Zhi R; Meng Y; Hou J; Wan J
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; [Epub ahead of print]. PubMed ID: 38442059

  • 23. EPANet-KD: Efficient progressive attention network for fine-grained provincial village classification via knowledge distillation.
    Zhang C; Liu C; Gong H; Teng J
    PLoS One; 2024; 19(2):e0298452. PubMed ID: 38359020

  • 24. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.
    Zhang Y; Chen Z; Yang X
    Comput Biol Med; 2024 Mar; 170():108088. PubMed ID: 38320339

  • 25. Knowledge Distillation Meets Label Noise Learning: Ambiguity-Guided Mutual Label Refinery.
    Jiang R; Yan Y; Xue JH; Chen S; Wang N; Wang H
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; [Epub ahead of print]. PubMed ID: 38019631

  • 26. Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
    Niyaz U; Sambyal AS; Bathula DR
    Comput Biol Med; 2024 Jan; 168():107764. PubMed ID: 38056210

  • 27. Enhancing Few-Shot Learning in Lightweight Models via Dual-Faceted Knowledge Distillation.
    Zhou B; Cheng T; Zhao J; Yan C; Jiang L; Zhang X; Gu J
    Sensors (Basel); 2024 Mar; 24(6). PubMed ID: 38544077

  • 28. Specific Expert Learning: Enriching Ensemble Diversity via Knowledge Distillation.
    Kao WC; Xie HX; Lin CY; Cheng WH
    IEEE Trans Cybern; 2023 Apr; 53(4):2494-2505. PubMed ID: 34793316

  • 29. DSP-KD: Dual-Stage Progressive Knowledge Distillation for Skin Disease Classification.
    Zeng X; Ji Z; Zhang H; Chen R; Liao Q; Wang J; Lyu T; Zhao L
    Bioengineering (Basel); 2024 Jan; 11(1). PubMed ID: 38247947

  • 30. Sample self-selection using dual teacher networks for pathological image classification with noisy labels.
    Han G; Guo W; Zhang H; Jin J; Gan X; Zhao X
    Comput Biol Med; 2024 May; 174():108489. PubMed ID: 38640633

  • 31. Knowledge Fusion Distillation: Improving Distillation with Multi-scale Attention Mechanisms.
    Li L; Su W; Liu F; He M; Liang X
    Neural Process Lett; 2023 Jan; 1-16. PubMed ID: 36619739

  • 32. Self-Distillation for Randomized Neural Networks.
    Hu M; Gao R; Suganthan PN
    IEEE Trans Neural Netw Learn Syst; 2024 Nov; 35(11):16119-16128. PubMed ID: 37585327

  • 33. Continual Learning With Knowledge Distillation: A Survey.
    Li S; Su T; Zhang XY; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2024 Oct; [Epub ahead of print]. PubMed ID: 39423075

  • 34. Using ensembles and distillation to optimize the deployment of deep learning models for the classification of electronic cancer pathology reports.
    De Angeli K; Gao S; Blanchard A; Durbin EB; Wu XC; Stroup A; Doherty J; Schwartz SM; Wiggins C; Coyle L; Penberthy L; Tourassi G; Yoon HJ
    JAMIA Open; 2022 Oct; 5(3):ooac075. PubMed ID: 36110150

  • 35. Tolerant Self-Distillation for image classification.
    Liu M; Yu Y; Ji Z; Han J; Zhang Z
    Neural Netw; 2024 Jun; 174():106215. PubMed ID: 38471261

  • 36. LLM-Enhanced Multi-Teacher Knowledge Distillation for Modality-Incomplete Emotion Recognition in Daily Healthcare.
    Zhang Y; Liu H; Xiao Y; Amoon M; Zhang D; Wang D; Yang S; Quek C
    IEEE J Biomed Health Inform; 2024 Sep; [Epub ahead of print]. PubMed ID: 39348250

  • 37. Multi-view Teacher-Student Network.
    Tian Y; Sun S; Tang J
    Neural Netw; 2022 Feb; 146():69-84. PubMed ID: 34839092

  • 38. MSKD: Structured knowledge distillation for efficient medical image segmentation.
    Zhao L; Qian X; Guo Y; Song J; Hou J; Gong J
    Comput Biol Med; 2023 Sep; 164():107284. PubMed ID: 37572439

  • 39. ResKD: Residual-Guided Knowledge Distillation.
    Li X; Li S; Omar B; Wu F; Li X
    IEEE Trans Image Process; 2021; 30():4735-4746. PubMed ID: 33739924

  • 40. ABUS tumor segmentation via decouple contrastive knowledge distillation.
    Pan P; Li Y; Chen H; Sun J; Li X; Cheng L
    Phys Med Biol; 2023 Dec; 69(1). PubMed ID: 38052091
    (No abstract available)
