BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

204 related articles for PubMed ID 31545712

  • 21. Layerwised multimodal knowledge distillation for vision-language pretrained model.
    Wang J; Liao D; Zhang Y; Xu D; Zhang X
    Neural Netw; 2024 Jul; 175:106272. PubMed ID: 38569460

  • 22. Lifelong Dual Generative Adversarial Nets Learning in Tandem.
    Ye F; Bors AG
    IEEE Trans Cybern; 2024 Mar; 54(3):1353-1365. PubMed ID: 37262118

  • 23. Highlight Every Step: Knowledge Distillation via Collaborative Teaching.
    Zhao H; Sun X; Dong J; Chen C; Dong Z
    IEEE Trans Cybern; 2022 Apr; 52(4):2070-2081. PubMed ID: 32721909

  • 24. Complementary label learning based on knowledge distillation.
    Ying P; Li Z; Sun R; Xu X
    Math Biosci Eng; 2023 Sep; 20(10):17905-17918. PubMed ID: 38052542

  • 25. Adversarial Entropy Optimization for Unsupervised Domain Adaptation.
    Ma A; Li J; Lu K; Zhu L; Shen HT
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6263-6274. PubMed ID: 33939616

  • 26. Efficient knowledge distillation for liver CT segmentation using growing assistant network.
    Xu P; Kim K; Koh J; Wu D; Lee YR; Park SY; Tak WY; Liu H; Li Q
    Phys Med Biol; 2021 Nov; 66(23). PubMed ID: 34768246

  • 27. Diversity-driven knowledge distillation for financial trading using Deep Reinforcement Learning.
    Tsantekidis A; Passalis N; Tefas A
    Neural Netw; 2021 Aug; 140:193-202. PubMed ID: 33774425

  • 28. Feature Map Distillation of Thin Nets for Low-Resolution Object Recognition.
    Huang Z; Yang S; Zhou M; Li Z; Gong Z; Chen Y
    IEEE Trans Image Process; 2022; 31:1364-1379. PubMed ID: 35025743

  • 29. Distilling Knowledge by Mimicking Features.
    Wang GH; Ge Y; Wu J
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8183-8195. PubMed ID: 34379588

  • 30. Knowledge distillation under ideal joint classifier assumption.
    Li H; Chen X; Ditzler G; Roveda J; Li A
    Neural Netw; 2024 May; 173:106160. PubMed ID: 38330746

  • 31. NTCE-KD: Non-Target-Class-Enhanced Knowledge Distillation.
    Li C; Teng X; Ding Y; Lan L
    Sensors (Basel); 2024 Jun; 24(11). PubMed ID: 38894408

  • 32. Learning From Human Educational Wisdom: A Student-Centered Knowledge Distillation Method.
    Yang S; Yang J; Zhou M; Huang Z; Zheng WS; Yang X; Ren J
    IEEE Trans Pattern Anal Mach Intell; 2024 Jun; 46(6):4188-4205. PubMed ID: 38227419

  • 33. Frameless Graph Knowledge Distillation.
    Shi D; Shao Z; Gao J; Wang Z; Guo Y
    IEEE Trans Neural Netw Learn Syst; 2024 Sep; PP. PubMed ID: 39231057

  • 34. Generative adversarial network based telecom fraud detection at the receiving bank.
    Zheng YJ; Zhou XH; Sheng WG; Xue Y; Chen SY
    Neural Netw; 2018 Jun; 102:78-86. PubMed ID: 29558653

  • 35. Joint Dual Feature Distillation and Gradient Progressive Pruning for BERT compression.
    Zhang Z; Lu Y; Wang T; Wei X; Wei Z
    Neural Netw; 2024 Nov; 179:106533. PubMed ID: 39079378

  • 36. Adaptive Perspective Distillation for Semantic Segmentation.
    Tian Z; Chen P; Lai X; Jiang L; Liu S; Zhao H; Yu B; Yang MC; Jia J
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1372-1387. PubMed ID: 35294341

  • 37. A Framework of Composite Functional Gradient Methods for Generative Adversarial Models.
    Johnson R; Zhang T
    IEEE Trans Pattern Anal Mach Intell; 2021 Jan; 43(1):17-32. PubMed ID: 31247543

  • 38. Comprehensive learning and adaptive teaching: Distilling multi-modal knowledge for pathological glioma grading.
    Xing X; Zhu M; Chen Z; Yuan Y
    Med Image Anal; 2024 Jan; 91:102990. PubMed ID: 37864912

  • 39. Spot-Adaptive Knowledge Distillation.
    Song J; Chen Y; Ye J; Song M
    IEEE Trans Image Process; 2022; 31:3359-3370. PubMed ID: 35503832

  • 40. Adversarial Knowledge Distillation Based Biomedical Factoid Question Answering.
    Bai J; Yin C; Zhang J; Wang Y; Dong Y; Rong W; Xiong Z
    IEEE/ACM Trans Comput Biol Bioinform; 2023; 20(1):106-118. PubMed ID: 35316189

    Page 2 of 11.