

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

128 related articles for article (PubMed ID: 38544077)

  • 1. Enhancing Few-Shot Learning in Lightweight Models via Dual-Faceted Knowledge Distillation.
    Zhou B; Cheng T; Zhao J; Yan C; Jiang L; Zhang X; Gu J
    Sensors (Basel); 2024 Mar; 24(6):. PubMed ID: 38544077

  • 2. Bridging the Gap Between Few-Shot and Many-Shot Learning via Distribution Calibration.
    Yang S; Wu S; Liu T; Xu M
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9830-9843. PubMed ID: 34860647

  • 3. Balancing Feature Alignment and Uniformity for Few-Shot Classification.
    Yu Y; Zhang D; Ji Z; Li X; Han J; Zhang Z
    IEEE Trans Image Process; 2023 Nov; PP():. PubMed ID: 37922165

  • 4. SSD-KD: A self-supervised diverse knowledge distillation method for lightweight skin lesion classification using dermoscopic images.
    Wang Y; Wang Y; Cai J; Lee TK; Miao C; Wang ZJ
    Med Image Anal; 2023 Feb; 84():102693. PubMed ID: 36462373

  • 5. Dual Distillation Discriminator Networks for Domain Adaptive Few-Shot Learning.
    Liu X; Ji Z; Pang Y; Han Z
    Neural Netw; 2023 Aug; 165():625-633. PubMed ID: 37364472

  • 6. Hierarchical Knowledge Propagation and Distillation for Few-Shot Learning.
    Zhou C; Wang H; Zhou S; Yu Z; Bandara D; Bu J
    Neural Netw; 2023 Oct; 167():615-625. PubMed ID: 37713767

  • 7. DCCD: Reducing Neural Network Redundancy via Distillation.
    Liu Y; Chen J; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254

  • 8. Cosine similarity-guided knowledge distillation for robust object detectors.
    Park S; Kang D; Paik J
    Sci Rep; 2024 Aug; 14(1):18888. PubMed ID: 39143179

  • 9. Disentangled Feature Representation for Few-Shot Image Classification.
    Cheng H; Wang Y; Li H; Kot AC; Wen B
    IEEE Trans Neural Netw Learn Syst; 2024 Aug; 35(8):10422-10435. PubMed ID: 37027772

  • 10. Towards efficient network compression via Few-Shot Slimming.
    He J; Ding Y; Zhang M; Li D
    Neural Netw; 2022 Mar; 147():113-125. PubMed ID: 34999388

  • 11. MMT: Cross Domain Few-Shot Learning via Meta-Memory Transfer.
    Wang W; Duan L; Wang Y; Fan J; Zhang Z
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15018-15035. PubMed ID: 37594873

  • 12. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
    Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N
    PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614

  • 13. How to Trust Unlabeled Data? Instance Credibility Inference for Few-Shot Learning.
    Wang Y; Zhang L; Yao Y; Fu Y
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6240-6253. PubMed ID: 34081579

  • 14. Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration.
    Zhang W; Gu X
    Entropy (Basel); 2023 May; 25(5):. PubMed ID: 37238532

  • 15. FSCC: Few-Shot Learning for Macromolecule Classification Based on Contrastive Learning and Distribution Calibration in Cryo-Electron Tomography.
    Gao S; Zeng X; Xu M; Zhang F
    Front Mol Biosci; 2022; 9():931949. PubMed ID: 35865006

  • 16. Efficient image classification through collaborative knowledge distillation: A novel AlexNet modification approach.
    Kuldashboy A; Umirzakova S; Allaberdiev S; Nasimov R; Abdusalomov A; Cho YI
    Heliyon; 2024 Jul; 10(14):e34376. PubMed ID: 39113984

  • 17. Meta-Transfer Learning Through Hard Tasks.
    Sun Q; Liu Y; Chen Z; Chua TS; Schiele B
    IEEE Trans Pattern Anal Mach Intell; 2022 Mar; 44(3):1443-1456. PubMed ID: 32822293

  • 18. NTCE-KD: Non-Target-Class-Enhanced Knowledge Distillation.
    Li C; Teng X; Ding Y; Lan L
    Sensors (Basel); 2024 Jun; 24(11):. PubMed ID: 38894408

  • 19. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.
    Zhang Y; Chen Z; Yang X
    Comput Biol Med; 2024 Mar; 170():108088. PubMed ID: 38320339

  • 20. Unsupervised Few-Shot Feature Learning via Self-Supervised Training.
    Ji Z; Zou X; Huang T; Wu S
    Front Comput Neurosci; 2020; 14():83. PubMed ID: 33178000
