These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

193 related articles for PubMed ID 38052542

  • 1. Complementary label learning based on knowledge distillation.
    Ying P; Li Z; Sun R; Xu X
    Math Biosci Eng; 2023 Sep; 20(10):17905-17918. PubMed ID: 38052542

  • 2. Self-supervised knowledge distillation for complementary label learning.
    Liu J; Li B; Lei M; Shi Y
    Neural Netw; 2022 Nov; 155():318-327. PubMed ID: 36099664

  • 3. Uninformed Teacher-Student for hard-samples distillation in weakly supervised mitosis localization.
    Fernandez-Martín C; Silva-Rodriguez J; Kiraz U; Morales S; Janssen EAM; Naranjo V
    Comput Med Imaging Graph; 2024 Mar; 112():102328. PubMed ID: 38244279

  • 4. Learning Student Network Under Universal Label Noise.
    Tang J; Jiang N; Zhu H; Tianyi Zhou J; Gong C
    IEEE Trans Image Process; 2024; 33():4363-4376. PubMed ID: 39074017

  • 5. Teacher-student complementary sample contrastive distillation.
    Bao Z; Huang Z; Gou J; Du L; Liu K; Zhou J; Chen Y
    Neural Netw; 2024 Feb; 170():176-189. PubMed ID: 37989039

  • 6. Cross-modal knowledge distillation for continuous sign language recognition.
    Gao L; Shi P; Hu L; Feng J; Zhu L; Wan L; Feng W
    Neural Netw; 2024 Nov; 179():106587. PubMed ID: 39111160

  • 7. Resolution-based distillation for efficient histology image classification.
    DiPalma J; Suriawinata AA; Tafe LJ; Torresani L; Hassanpour S
    Artif Intell Med; 2021 Sep; 119():102136. PubMed ID: 34531005

  • 8. Class-imbalanced complementary-label learning via weighted loss.
    Wei M; Zhou Y; Li Z; Xu X
    Neural Netw; 2023 Sep; 166():555-565. PubMed ID: 37586256

  • 9. ComCo: Complementary supervised contrastive learning for complementary label learning.
    Jiang H; Sun Z; Tian Y
    Neural Netw; 2024 Jan; 169():44-56. PubMed ID: 37857172

  • 10. Representational Distance Learning for Deep Neural Networks.
    McClure P; Kriegeskorte N
    Front Comput Neurosci; 2016; 10():131. PubMed ID: 28082889

  • 11. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance.
    Xu TB; Liu CL
    IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):257-269. PubMed ID: 33074828

  • 12. Sample self-selection using dual teacher networks for pathological image classification with noisy labels.
    Han G; Guo W; Zhang H; Jin J; Gan X; Zhao X
    Comput Biol Med; 2024 May; 174():108489. PubMed ID: 38640633

  • 13. DCCD: Reducing Neural Network Redundancy via Distillation.
    Liu Y; Chen J; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254

  • 14. Leveraging Symbolic Knowledge Bases for Commonsense Natural Language Inference Using Pattern Theory.
    Aakur SN; Sarkar S
    IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13185-13202. PubMed ID: 37339033

  • 15. Knowledge Distillation Meets Label Noise Learning: Ambiguity-Guided Mutual Label Refinery.
    Jiang R; Yan Y; Xue JH; Chen S; Wang N; Wang H
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; [Epub ahead of print]. PubMed ID: 38019631

  • 16. Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
    Marini N; Otálora S; Müller H; Atzori M
    Med Image Anal; 2021 Oct; 73():102165. PubMed ID: 34303169

  • 17. Generalized Knowledge Distillation via Relationship Matching.
    Ye HJ; Lu S; Zhan DC
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1817-1834. PubMed ID: 35298374

  • 18. A semi supervised approach to Arabic aspect category detection using Bert and teacher-student model.
    Almasri M; Al-Malki N; Alotaibi R
    PeerJ Comput Sci; 2023; 9():e1425. PubMed ID: 37346563

  • 19. Densely Distilled Flow-Based Knowledge Transfer in Teacher-Student Framework for Image Classification.
    Bae JH; Yeo D; Yim J; Kim NS; Pyo CS; Kim J
    IEEE Trans Image Process; 2020 Apr; [Epub ahead of print]. PubMed ID: 32286978

  • 20. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
    Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N
    PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614
