

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

170 related articles for article (PubMed ID: 33513099), showing results 21-40.

  • 21. A General Dynamic Knowledge Distillation Method for Visual Analytics.
    Tu Z; Liu X; Xiao X
    IEEE Trans Image Process; 2022 Oct; PP (epub ahead of print). PubMed ID: 36227819

  • 22. Learning From Human Educational Wisdom: A Student-Centered Knowledge Distillation Method.
    Yang S; Yang J; Zhou M; Huang Z; Zheng WS; Yang X; Ren J
    IEEE Trans Pattern Anal Mach Intell; 2024 Jun; 46(6):4188-4205. PubMed ID: 38227419

  • 23. Self-Distillation for Randomized Neural Networks.
    Hu M; Gao R; Suganthan PN
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP (epub ahead of print). PubMed ID: 37585327

  • 24. Knowledge Distillation Meets Label Noise Learning: Ambiguity-Guided Mutual Label Refinery.
    Jiang R; Yan Y; Xue JH; Chen S; Wang N; Wang H
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP (epub ahead of print). PubMed ID: 38019631

  • 25. A Novel Approach to Classifying Breast Cancer Histopathology Biopsy Images Using Bilateral Knowledge Distillation and Label Smoothing Regularization.
    Chaudhury S; Shelke N; Sau K; Prasanalakshmi B; Shabaz M
    Comput Math Methods Med; 2021; 2021:4019358. PubMed ID: 34721657

  • 26. Classification of Alzheimer's disease in MRI images using knowledge distillation framework: an investigation.
    Li Y; Luo J; Zhang J
    Int J Comput Assist Radiol Surg; 2022 Jul; 17(7):1235-1243. PubMed ID: 35633492

  • 27. AdaDFKD: Exploring adaptive inter-sample relationship in data-free knowledge distillation.
    Li J; Zhou S; Li L; Wang H; Bu J; Yu Z
    Neural Netw; 2024 Sep; 177():106386. PubMed ID: 38776761

  • 28. A comprehensive study of class incremental learning algorithms for visual tasks.
    Belouadah E; Popescu A; Kanellos I
    Neural Netw; 2021 Mar; 135():38-54. PubMed ID: 33341513

  • 29. Knowledge distillation circumvents nonlinearity for optical convolutional neural networks.
    Xiang J; Colburn S; Majumdar A; Shlizerman E
    Appl Opt; 2022 Mar; 61(9):2173-2183. PubMed ID: 35333231

  • 30. Efficient knowledge distillation for liver CT segmentation using growing assistant network.
    Xu P; Kim K; Koh J; Wu D; Rim Lee Y; Young Park S; Young Tak W; Liu H; Li Q
    Phys Med Biol; 2021 Nov; 66(23). PubMed ID: 34768246

  • 31. Knowledge Distillation for Face Photo-Sketch Synthesis.
    Zhu M; Li J; Wang N; Gao X
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; 33(2):893-906. PubMed ID: 33108298

  • 32. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
    Marini N; Otálora S; Müller H; Atzori M
    Med Image Anal; 2021 Oct; 73():102165. PubMed ID: 34303169

  • 33. Resolution-Aware Knowledge Distillation for Efficient Inference.
    Feng Z; Lai J; Xie X
    IEEE Trans Image Process; 2021; 30():6985-6996. PubMed ID: 34347598

  • 34. OptiDistillNet: Learning nonlinear pulse propagation using the student-teacher model.
    Gautam N; Kaushik V; Choudhary A; Lall B
    Opt Express; 2022 Nov; 30(23):42430-42439. PubMed ID: 36366697

  • 35. Attention and feature transfer based knowledge distillation.
    Yang G; Yu S; Sheng Y; Yang H
    Sci Rep; 2023 Oct; 13(1):18369. PubMed ID: 37884556

  • 36. BERTtoCNN: Similarity-preserving enhanced knowledge distillation for stance detection.
    Li Y; Sun Y; Zhu N
    PLoS One; 2021; 16(9):e0257130. PubMed ID: 34506549

  • 37. CKDF: Cascaded Knowledge Distillation Framework for Robust Incremental Learning.
    Li K; Wan J; Yu S
    IEEE Trans Image Process; 2022; 31():3825-3837. PubMed ID: 35609094

  • 38. Rethinking the performance comparison between SNNS and ANNS.
    Deng L; Wu Y; Hu X; Liang L; Ding Y; Li G; Zhao G; Li P; Xie Y
    Neural Netw; 2020 Jan; 121():294-307. PubMed ID: 31586857

  • 39. Deep Learning: The Good, the Bad, and the Ugly.
    Serre T
    Annu Rev Vis Sci; 2019 Sep; 5():399-426. PubMed ID: 31394043

  • 40. STKD: Distilling Knowledge From Synchronous Teaching for Efficient Model Compression.
    Su T; Zhang J; Yu Z; Wang G; Liu X
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10051-10064. PubMed ID: 35420989
