112 related articles for article (PubMed ID: 37976188)

  • 1. Deep Learning Model Compression With Rank Reduction in Tensor Decomposition.
    Dai W; Fan J; Miao Y; Hwang K
IEEE Trans Neural Netw Learn Syst; 2023 Nov (epub ahead of print). PubMed ID: 37976188

  • 2. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
    Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N
    PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614

  • 3. ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition.
    Zhong Z; Wei F; Lin Z; Zhang C
    Neural Netw; 2019 Feb; 110():104-115. PubMed ID: 30508807

  • 4. An Optimization Method for Non-IID Federated Learning Based on Deep Reinforcement Learning.
    Meng X; Li Y; Lu J; Ren X
    Sensors (Basel); 2023 Nov; 23(22):. PubMed ID: 38005610

  • 5. Nonlinear tensor train format for deep neural network compression.
    Wang D; Zhao G; Chen H; Liu Z; Deng L; Li G
    Neural Netw; 2021 Dec; 144():320-333. PubMed ID: 34547670

  • 6. Genetic CFL: Hyperparameter Optimization in Clustered Federated Learning.
    Agrawal S; Sarkar S; Alazab M; Maddikunta PKR; Gadekallu TR; Pham QV
    Comput Intell Neurosci; 2021; 2021():7156420. PubMed ID: 34840562

  • 7. DSFedCon: Dynamic Sparse Federated Contrastive Learning for Data-Driven Intelligent Systems.
    Li Z; Chen J; Zhang P; Huang H; Li G
IEEE Trans Neural Netw Learn Syst; 2024 Jan (epub ahead of print). PubMed ID: 38277248

  • 8. Compression Helps Deep Learning in Image Classification.
    Yang EH; Amer H; Jiang Y
    Entropy (Basel); 2021 Jul; 23(7):. PubMed ID: 34356422

  • 9. The Role of Knowledge Creation-Oriented Convolutional Neural Network in Learning Interaction.
    Zhang H; Luo X
    Comput Intell Neurosci; 2022; 2022():6493311. PubMed ID: 35341199

  • 10. HRel: Filter pruning based on High Relevance between activation maps and class labels.
    Sarvani CH; Ghorai M; Dubey SR; Basha SHS
    Neural Netw; 2022 Mar; 147():186-197. PubMed ID: 35042156

  • 11. A novel federated deep learning scheme for glioma and its subtype classification.
    Ali MB; Gu IY; Berger MS; Jakola AS
    Front Neurosci; 2023; 17():1181703. PubMed ID: 37287799

  • 12. Evaluation of Deep Neural Network Compression Methods for Edge Devices Using Weighted Score-Based Ranking Scheme.
    Ademola OA; Leier M; Petlenkov E
    Sensors (Basel); 2021 Nov; 21(22):. PubMed ID: 34833610

  • 13. Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks.
    Chen Z; Xu TB; Du C; Liu CL; He H
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):799-813. PubMed ID: 32275616

  • 14. Redundant feature pruning for accelerated inference in deep neural networks.
    Ayinde BO; Inanc T; Zurada JM
    Neural Netw; 2019 Oct; 118():148-158. PubMed ID: 31279285

  • 15. DeepCompNet: A Novel Neural Net Model Compression Architecture.
    Mary Shanthi Rani M; Chitra P; Lakshmanan S; Kalpana Devi M; Sangeetha R; Nithya S
    Comput Intell Neurosci; 2022; 2022():2213273. PubMed ID: 35242176

  • 16. Sign-Based Gradient Descent With Heterogeneous Data: Convergence and Byzantine Resilience.
    Jin R; Liu Y; Huang Y; He X; Wu T; Dai H
IEEE Trans Neural Netw Learn Syst; 2024 Jan (epub ahead of print). PubMed ID: 38215315

  • 17. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance.
    Xu TB; Liu CL
    IEEE Trans Neural Netw Learn Syst; 2022 Jan; 33(1):257-269. PubMed ID: 33074828

  • 18. DCCD: Reducing Neural Network Redundancy via Distillation.
    Liu Y; Chen J; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):10006-10017. PubMed ID: 37022254

  • 19. Compression of Deep Neural Networks based on quantized tensor decomposition to implement on reconfigurable hardware platforms.
    Nekooei A; Safari S
    Neural Netw; 2022 Jun; 150():350-363. PubMed ID: 35344706

  • 20. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
    Ruan X; Liu Y; Yuan C; Li B; Hu W; Li Y; Maybank S
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4499-4513. PubMed ID: 33136545
