These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service with any questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

156 related articles for article (PubMed ID: 33136545)

  • 1. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
    Ruan X; Liu Y; Yuan C; Li B; Hu W; Li Y; Maybank S
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4499-4513. PubMed ID: 33136545

  • 2. Discrimination-Aware Network Pruning for Deep Model Compression.
    Liu J; Zhuang B; Zhuang Z; Guo Y; Huang J; Zhu J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4035-4051. PubMed ID: 33755553

  • 3. Weak sub-network pruning for strong and efficient neural networks.
    Guo Q; Wu XJ; Kittler J; Feng Z
    Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719

  • 4. Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning.
    Lin S; Ji R; Li Y; Deng C; Li X
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):574-588. PubMed ID: 30990448

  • 5. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
    Chen L; Gong S; Shi X; Shang M
    Front Comput Neurosci; 2021; 15():760554. PubMed ID: 34776916

  • 6. Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
    Tian G; Sun Y; Liu Y; Zeng X; Wang M; Liu Y; Zhang J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; PP():. PubMed ID: 34487502

  • 7. Perturbation of deep autoencoder weights for model compression and classification of tabular data.
    Abrar S; Samad MD
    Neural Netw; 2022 Dec; 156():160-169. PubMed ID: 36270199

  • 8. Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks.
    Wu T; Li X; Zhou D; Li N; Shi J
    Sensors (Basel); 2021 Jan; 21(3):. PubMed ID: 33525527

  • 9. GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
    Niu W; Li Z; Ma X; Dong P; Zhou G; Qian X; Lin X; Wang Y; Ren B
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6224-6239. PubMed ID: 34133272

  • 10. Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks.
    Chen Z; Xu TB; Du C; Liu CL; He H
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):799-813. PubMed ID: 32275616

  • 11. Feature flow regularization: Improving structured sparsity in deep neural networks.
    Wu Y; Lan Y; Zhang L; Xiang Y
    Neural Netw; 2023 Apr; 161():598-613. PubMed ID: 36822145

  • 12. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 13. ACSL: Adaptive correlation-driven sparsity learning for deep neural network compression.
    He W; Wu M; Lam SK
    Neural Netw; 2021 Dec; 144():465-477. PubMed ID: 34600219

  • 14. Auxiliary Pneumonia Classification Algorithm Based on Pruning Compression.
    Yang CP; Zhu JQ; Yan T; Su QL; Zheng LX
    Comput Math Methods Med; 2022; 2022():8415187. PubMed ID: 35898478

  • 15. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
    Huang L; Zeng J; Sun S; Wang W; Wang Y; Wang K
    Entropy (Basel); 2021 Aug; 23(8):. PubMed ID: 34441182

  • 16. Structural Compression of Convolutional Neural Networks with Applications in Interpretability.
    Abbasi-Asl R; Yu B
    Front Big Data; 2021; 4():704182. PubMed ID: 34514381

  • 17. A Flexible Coding Scheme Based on Block Krylov Subspace Approximation for Light Field Displays with Stacked Multiplicative Layers.
    Ravishankar J; Sharma M; Gopalakrishnan P
    Sensors (Basel); 2021 Jul; 21(13):. PubMed ID: 34283132

  • 18. CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization.
    Hu W; Che Z; Liu N; Li M; Tang J; Zhang C; Wang J
    IEEE Trans Neural Netw Learn Syst; 2024 Aug; 35(8):11595-11607. PubMed ID: 37023168

  • 19. SOKS: Automatic Searching of the Optimal Kernel Shapes for Stripe-Wise Network Pruning.
    Liu G; Zhang K; Lv M
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9912-9924. PubMed ID: 35412989

  • 20. Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression.
    Liu Y; Cao J; Li B; Hu W; Maybank S
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3378-3395. PubMed ID: 35731774
