

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

233 related articles for article (PubMed ID: 33267071)

  • 1. Multistructure-Based Collaborative Online Distillation.
    Gao L; Lan X; Mi H; Feng D; Xu K; Peng Y
    Entropy (Basel); 2019 Apr; 21(4):. PubMed ID: 33267071

  • 2. DCCD: Reducing Neural Network Redundancy via Distillation.
    Liu Y; Chen J; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254

  • 3. Self-Distillation: Towards Efficient and Compact Neural Networks.
    Zhang L; Bao C; Ma K
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4388-4403. PubMed ID: 33735074

  • 4. Adaptive Search-and-Training for Robust and Efficient Network Pruning.
    Lu X; Dong W; Li X; Wu J; Li L; Shi G
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9325-9338. PubMed ID: 37027639

  • 5. Specific Expert Learning: Enriching Ensemble Diversity via Knowledge Distillation.
    Kao WC; Xie HX; Lin CY; Cheng WH
    IEEE Trans Cybern; 2023 Apr; 53(4):2494-2505. PubMed ID: 34793316

  • 6. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
    Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N
    PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614

  • 7. Highlight Every Step: Knowledge Distillation via Collaborative Teaching.
    Zhao H; Sun X; Dong J; Chen C; Dong Z
    IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909

  • 8. Cross-domain visual prompting with spatial proximity knowledge distillation for histological image classification.
    Li X; Huang G; Cheng L; Zhong G; Liu W; Chen X; Cai M
    J Biomed Inform; 2024 Oct; 158():104728. PubMed ID: 39307515

  • 9. Cyclic Differentiable Architecture Search.
    Yu H; Peng H; Huang Y; Fu J; Du H; Wang L; Ling H
    IEEE Trans Pattern Anal Mach Intell; 2023 Jan; 45(1):211-228. PubMed ID: 35196225

  • 10. Learning lightweight tea detector with reconstructed feature and dual distillation.
    Zheng Z; Zuo G; Zhang W; Zhang C; Zhang J; Rao Y; Jiang Z
    Sci Rep; 2024 Oct; 14(1):23669. PubMed ID: 39390063

  • 11. Differential convolutional neural network.
    Sarıgül M; Ozyildirim BM; Avci M
    Neural Netw; 2019 Aug; 116():279-287. PubMed ID: 31125914

  • 12. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance.
    Xu TB; Liu CL
    IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):257-269. PubMed ID: 33074828

  • 13. Relieving the Incompatibility of Network Representation and Classification for Long-Tailed Data Distribution.
    Hu H; Gao M; Wu M
    Comput Intell Neurosci; 2021; 2021():6702625. PubMed ID: 34987568

  • 14. Improving Differentiable Architecture Search via self-distillation.
    Zhu X; Li J; Liu Y; Wang W
    Neural Netw; 2023 Oct; 167():656-667. PubMed ID: 37717323

  • 15. Self-knowledge distillation for surgical phase recognition.
    Zhang J; Barbarisi S; Kadkhodamohammadi A; Stoyanov D; Luengo I
    Int J Comput Assist Radiol Surg; 2024 Jan; 19(1):61-68. PubMed ID: 37340283

  • 16. IDSNN: Towards High-Performance and Low-Latency SNN Training via Initialization and Distillation.
    Fan X; Zhang H; Zhang Y
    Biomimetics (Basel); 2023 Aug; 8(4):. PubMed ID: 37622980

  • 17. Memory-Replay Knowledge Distillation.
    Wang J; Zhang P; Li Y
    Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33921068

  • 18. Importance-aware adaptive dataset distillation.
    Li G; Togo R; Ogawa T; Haseyama M
    Neural Netw; 2024 Apr; 172():106154. PubMed ID: 38309137

  • 19. RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging.
    Jaiswal A; Ashutosh K; Rousseau JF; Peng Y; Wang Z; Ding Y
    Proc IEEE Int Conf Data Min; 2022; 2022():981-986. PubMed ID: 37038389

  • 20. Deep Neural Network Compression by In-Parallel Pruning-Quantization.
    Tung F; Mori G
    IEEE Trans Pattern Anal Mach Intell; 2020 Mar; 42(3):568-579. PubMed ID: 30561340
