

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

233 related articles for article (PubMed ID: 33267071)

  • 21. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
    Sengupta A; Ye Y; Wang R; Liu C; Roy K
    Front Neurosci; 2019; 13():95. PubMed ID: 30899212

  • 22. Optimizing the Deep Neural Networks by Layer-Wise Refined Pruning and the Acceleration on FPGA.
    Li H; Yue X; Wang Z; Chai Z; Wang W; Tomiyama H; Meng L
    Comput Intell Neurosci; 2022; 2022():8039281. PubMed ID: 35694575

  • 23. KnowRU: Knowledge Reuse via Knowledge Distillation in Multi-Agent Reinforcement Learning.
    Gao Z; Xu K; Ding B; Wang H
    Entropy (Basel); 2021 Aug; 23(8):. PubMed ID: 34441184

  • 24. Compressing recognition network of cotton disease with spot-adaptive knowledge distillation.
    Zhang X; Feng Q; Zhu D; Liang X; Zhang J
    Front Plant Sci; 2024; 15():1433543. PubMed ID: 39391779

  • 25. Uncertainty-Aware Knowledge Distillation for Collision Identification of Collaborative Robots.
    Kwon W; Jin Y; Lee SJ
    Sensors (Basel); 2021 Oct; 21(19):. PubMed ID: 34640993

  • 26. Knowledge Fusion Distillation: Improving Distillation with Multi-scale Attention Mechanisms.
    Li L; Su W; Liu F; He M; Liang X
    Neural Process Lett; 2023 Jan; ():1-16. PubMed ID: 36619739

  • 27. FedRAD: Heterogeneous Federated Learning via Relational Adaptive Distillation.
    Tang J; Ding X; Hu D; Guo B; Shen Y; Ma P; Jiang Y
    Sensors (Basel); 2023 Jul; 23(14):. PubMed ID: 37514811

  • 28. A 3DCNN-Based Knowledge Distillation Framework for Human Activity Recognition.
    Ullah H; Munir A
    J Imaging; 2023 Apr; 9(4):. PubMed ID: 37103233

  • 29. Knowledge Distillation for Semantic Segmentation Using Channel and Spatial Correlations and Adaptive Cross Entropy.
    Park S; Heo YS
    Sensors (Basel); 2020 Aug; 20(16):. PubMed ID: 32824456

  • 30. Self-Distillation for Randomized Neural Networks.
    Hu M; Gao R; Suganthan PN
    IEEE Trans Neural Netw Learn Syst; 2024 Nov; 35(11):16119-16128. PubMed ID: 37585327

  • 31. Resolution-Aware Knowledge Distillation for Efficient Inference.
    Feng Z; Lai J; Xie X
    IEEE Trans Image Process; 2021; 30():6985-6996. PubMed ID: 34347598

  • 32. Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization.
    Yang Z; Zhang S; Li R; Li C; Wang M; Wang D; Zhang M
    Sensors (Basel); 2021 Jan; 21(2):. PubMed ID: 33435143

  • 33. The effects of physics-based data augmentation on the generalizability of deep neural networks: Demonstration on nodule false-positive reduction.
    Omigbodun AO; Noo F; McNitt-Gray M; Hsu W; Hsieh SS
    Med Phys; 2019 Oct; 46(10):4563-4574. PubMed ID: 31396974

  • 34. Cervical Cell Image Classification-Based Knowledge Distillation.
    Gao W; Xu C; Li G; Zhang Y; Bai N; Li M
    Biomimetics (Basel); 2022 Nov; 7(4):. PubMed ID: 36412723

  • 35. Collaborative Knowledge Distillation via Multiknowledge Transfer.
    Gou J; Sun L; Yu B; Du L; Ramamohanarao K; Tao D
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):6718-6730. PubMed ID: 36264723

  • 36. Unsupervised Anomaly Detection with Distillated Teacher-Student Network Ensemble.
    Xiao Q; Wang J; Lin Y; Gongsa W; Hu G; Li M; Wang F
    Entropy (Basel); 2021 Feb; 23(2):. PubMed ID: 33561954

  • 37. Spot-Adaptive Knowledge Distillation.
    Song J; Chen Y; Ye J; Song M
    IEEE Trans Image Process; 2022; 31():3359-3370. PubMed ID: 35503832

  • 38. Compressing deep graph convolution network with multi-staged knowledge distillation.
    Kim J; Jung J; Kang U
    PLoS One; 2021; 16(8):e0256187. PubMed ID: 34388224

  • 39. Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression.
    Liu Y; Cao J; Li B; Hu W; Maybank S
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3378-3395. PubMed ID: 35731774

  • 40. Data-Efficient Sensor Upgrade Path Using Knowledge Distillation.
    Van Molle P; De Boom C; Verbelen T; Vankeirsbilck B; De Vylder J; Diricx B; Simoens P; Dhoedt B
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640843
