
122 related articles for the article with PubMed ID 38367353

  • 1. DDK: Dynamic structure pruning based on differentiable search and recursive knowledge distillation for BERT.
    Zhang Z; Lu Y; Wang T; Wei X; Wei Z
    Neural Netw; 2024 May; 173():106164. PubMed ID: 38367353

  • 2. LAD: Layer-Wise Adaptive Distillation for BERT Model Compression.
    Lin YJ; Chen KY; Kao HY
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772523

  • 3. AUBER: Automated BERT regularization.
    Lee HD; Lee S; Kang U
    PLoS One; 2021; 16(6):e0253241. PubMed ID: 34181664

  • 4. DMPP: Differentiable multi-pruner and predictor for neural network pruning.
    Li J; Zhao B; Liu D
    Neural Netw; 2022 Mar; 147():103-112. PubMed ID: 34998270

  • 5. Knowledge distillation based on multi-layer fusion features.
    Tan S; Guo R; Tang J; Jiang N; Zou J
    PLoS One; 2023; 18(8):e0285901. PubMed ID: 37639443

  • 6. BERTtoCNN: Similarity-preserving enhanced knowledge distillation for stance detection.
    Li Y; Sun Y; Zhu N
    PLoS One; 2021; 16(9):e0257130. PubMed ID: 34506549

  • 7. Improving Differentiable Architecture Search via self-distillation.
    Zhu X; Li J; Liu Y; Wang W
    Neural Netw; 2023 Oct; 167():656-667. PubMed ID: 37717323

  • 8. Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
    Niyaz U; Sambyal AS; Bathula DR
    Comput Biol Med; 2024 Jan; 168():107764. PubMed ID: 38056210

  • 9. Knowledge Fusion Distillation: Improving Distillation with Multi-scale Attention Mechanisms.
    Li L; Su W; Liu F; He M; Liang X
    Neural Process Lett; 2023 Jan; ():1-16. PubMed ID: 36619739

  • 10. Application of Entity-BERT model based on neuroscience and brain-like cognition in electronic medical record entity recognition.
    Lu W; Jiang J; Shi Y; Zhong X; Gu J; Huangfu L; Gong M
    Front Neurosci; 2023; 17():1259652. PubMed ID: 37799340

  • 11. Model Compression Based on Differentiable Network Channel Pruning.
    Zheng YJ; Chen SB; Ding CHQ; Luo B
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10203-10212. PubMed ID: 35427225

  • 12. Layerwised multimodal knowledge distillation for vision-language pretrained model.
    Wang J; Liao D; Zhang Y; Xu D; Zhang X
    Neural Netw; 2024 Jul; 175():106272. PubMed ID: 38569460

  • 13. Self-Distillation: Towards Efficient and Compact Neural Networks.
    Zhang L; Bao C; Ma K
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4388-4403. PubMed ID: 33735074

  • 14. A Question-and-Answer System to Extract Data From Free-Text Oncological Pathology Reports (CancerBERT Network): Development Study.
    Mitchell JR; Szepietowski P; Howard R; Reisman P; Jones JD; Lewis P; Fridley BL; Rollison DE
    J Med Internet Res; 2022 Mar; 24(3):e27210. PubMed ID: 35319481

  • 15. Adaptive Search-and-Training for Robust and Efficient Network Pruning.
    Lu X; Dong W; Li X; Wu J; Li L; Shi G
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9325-9338. PubMed ID: 37027639

  • 16. Importance-aware adaptive dataset distillation.
    Li G; Togo R; Ogawa T; Haseyama M
    Neural Netw; 2024 Apr; 172():106154. PubMed ID: 38309137

  • 17. BERT-Kcr: prediction of lysine crotonylation sites by a transfer learning method with pre-trained BERT models.
    Qiao Y; Zhu X; Gong H
    Bioinformatics; 2022 Jan; 38(3):648-654. PubMed ID: 34643684

  • 18. DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search.
    Guan Y; Liu N; Zhao P; Che Z; Bian K; Wang Y; Tang J
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9847-9858. PubMed ID: 35380974

  • 19. Pea-KD: Parameter-efficient and accurate Knowledge Distillation on BERT.
    Cho I; Kang U
    PLoS One; 2022; 17(2):e0263592. PubMed ID: 35180258

  • 20. Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression.
    Liu Y; Cao J; Li B; Hu W; Maybank S
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3378-3395. PubMed ID: 35731774
