BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

132 related articles for the article with PubMed ID 38569460

  • 1. Layerwised multimodal knowledge distillation for vision-language pretrained model.
    Wang J; Liao D; Zhang Y; Xu D; Zhang X
    Neural Netw; 2024 Jul; 175():106272. PubMed ID: 38569460

  • 2. LAD: Layer-Wise Adaptive Distillation for BERT Model Compression.
    Lin YJ; Chen KY; Kao HY
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772523

  • 3. Highlight Every Step: Knowledge Distillation via Collaborative Teaching.
    Zhao H; Sun X; Dong J; Chen C; Dong Z
    IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909

  • 4. A single stage knowledge distillation network for brain tumor segmentation on limited MR image modalities.
    Choi Y; Al-Masni MA; Jung KJ; Yoo RE; Lee SY; Kim DH
    Comput Methods Programs Biomed; 2023 Oct; 240():107644. PubMed ID: 37307766

  • 5. A Novel Approach to Classifying Breast Cancer Histopathology Biopsy Images Using Bilateral Knowledge Distillation and Label Smoothing Regularization.
    Chaudhury S; Shelke N; Sau K; Prasanalakshmi B; Shabaz M
    Comput Math Methods Med; 2021; 2021():4019358. PubMed ID: 34721657

  • 6. Prototype-based sample-weighted distillation unified framework adapted to missing modality sentiment analysis.
    Zhang Y; Liu F; Zhuang X; Hou Y; Zhang Y
    Neural Netw; 2024 Sep; 177():106397. PubMed ID: 38805799

  • 7. Graph Flow: Cross-Layer Graph Flow Distillation for Dual Efficient Medical Image Segmentation.
    Zou W; Qi X; Zhou W; Sun M; Sun Z; Shan C
    IEEE Trans Med Imaging; 2023 Apr; 42(4):1159-1171. PubMed ID: 36423314

  • 8. AdaDFKD: Exploring adaptive inter-sample relationship in data-free knowledge distillation.
    Li J; Zhou S; Li L; Wang H; Bu J; Yu Z
    Neural Netw; 2024 Sep; 177():106386. PubMed ID: 38776761

  • 9. Constrained Adaptive Distillation Based on Topological Persistence for Wearable Sensor Data.
    Jeon ES; Choi H; Shukla A; Wang Y; Buman MP; Turaga P
    IEEE Trans Instrum Meas; 2023; 72():. PubMed ID: 38818128

  • 10. Multiscale knowledge distillation with attention based fusion for robust human activity recognition.
    Yuan Z; Yang Z; Ning H; Tang X
    Sci Rep; 2024 May; 14(1):12411. PubMed ID: 38816446

  • 11. Self-Distillation: Towards Efficient and Compact Neural Networks.
    Zhang L; Bao C; Ma K
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4388-4403. PubMed ID: 33735074

  • 12. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.
    Zhang Y; Chen Z; Yang X
    Comput Biol Med; 2024 Mar; 170():108088. PubMed ID: 38320339

  • 13. Data-Efficient Sensor Upgrade Path Using Knowledge Distillation.
    Van Molle P; De Boom C; Verbelen T; Vankeirsbilck B; De Vylder J; Diricx B; Simoens P; Dhoedt B
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640843

  • 14. Learning Student Networks via Feature Embedding.
    Chen H; Wang Y; Xu C; Xu C; Tao D
    IEEE Trans Neural Netw Learn Syst; 2021 Jan; 32(1):25-35. PubMed ID: 32092018

  • 15. Inferior and Coordinate Distillation for Object Detectors.
    Zhang Y; Li Y; Pan Z
    Sensors (Basel); 2022 Jul; 22(15):. PubMed ID: 35957276

  • 16. Learning With Privileged Multimodal Knowledge for Unimodal Segmentation.
    Chen C; Dou Q; Jin Y; Liu Q; Heng PA
    IEEE Trans Med Imaging; 2022 Mar; 41(3):621-632. PubMed ID: 34633927

  • 17. Multi-view Teacher-Student Network.
    Tian Y; Sun S; Tang J
    Neural Netw; 2022 Feb; 146():69-84. PubMed ID: 34839092

  • 18. DDK: Dynamic structure pruning based on differentiable search and recursive knowledge distillation for BERT.
    Zhang Z; Lu Y; Wang T; Wei X; Wei Z
    Neural Netw; 2024 May; 173():106164. PubMed ID: 38367353

  • 19. BERTtoCNN: Similarity-preserving enhanced knowledge distillation for stance detection.
    Li Y; Sun Y; Zhu N
    PLoS One; 2021; 16(9):e0257130. PubMed ID: 34506549

  • 20. Adversarial Distillation for Learning with Privileged Provisions.
    Wang X; Zhang R; Sun Y; Qi J
    IEEE Trans Pattern Anal Mach Intell; 2021 Mar; 43(3):786-797. PubMed ID: 31545712
