174 related articles for article (PubMed ID: 30216142)
1. Cross-Entropy Pruning for Compressing Convolutional Neural Networks.
Bao R; Yuan X; Chen Z; Ma R
Neural Comput; 2018 Nov; 30(11):3128-3149. PubMed ID: 30216142
2. HRel: Filter pruning based on High Relevance between activation maps and class labels.
Sarvani CH; Ghorai M; Dubey SR; Basha SHS
Neural Netw; 2022 Mar; 147():186-197. PubMed ID: 35042156
3. Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks.
He Y; Dong X; Kang G; Fu Y; Yan C; Yang Y
IEEE Trans Cybern; 2020 Aug; 50(8):3594-3604. PubMed ID: 31478883
4. Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning.
Lin S; Ji R; Li Y; Deng C; Li X
IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):574-588. PubMed ID: 30990448
5. Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks.
Wu T; Li X; Zhou D; Li N; Shi J
Sensors (Basel); 2021 Jan; 21(3):. PubMed ID: 33525527
6. Hierarchical Pruning for Simplification of Convolutional Neural Networks in Diabetic Retinopathy Classification.
Hajabdollahi M; Esfandiarpoor R; Najarian K; Karimi N; Samavi S; Reza Soroushmehr SM
Annu Int Conf IEEE Eng Med Biol Soc; 2019 Jul; 2019():970-973. PubMed ID: 31946055
7. Evolutionary Multi-Objective One-Shot Filter Pruning for Designing Lightweight Convolutional Neural Network.
Wu T; Shi J; Zhou D; Zheng X; Li N
Sensors (Basel); 2021 Sep; 21(17):. PubMed ID: 34502792
8. CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics.
Li G; Wang J; Shen HW; Chen K; Shan G; Lu Z
IEEE Trans Vis Comput Graph; 2021 Feb; 27(2):1364-1373. PubMed ID: 33048744
9. Dynamically Optimizing Network Structure Based on Synaptic Pruning in the Brain.
Zhao F; Zeng Y
Front Syst Neurosci; 2021; 15():620558. PubMed ID: 34177473
10. Where to Prune: Using LSTM to Guide Data-Dependent Soft Pruning.
Ding G; Zhang S; Jia Z; Zhong J; Han J
IEEE Trans Image Process; 2021; 30():293-304. PubMed ID: 33186105
11. Weak sub-network pruning for strong and efficient neural networks.
Guo Q; Wu XJ; Kittler J; Feng Z
Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719
12. Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations.
Chen S; Zhao Q
IEEE Trans Pattern Anal Mach Intell; 2019 Dec; 41(12):3048-3056. PubMed ID: 30296213
13. Random pruning: channel sparsity by expectation scaling factor.
Sun C; Chen J; Li Y; Wang W; Ma T
PeerJ Comput Sci; 2023; 9():e1564. PubMed ID: 37705629
14. Implementation of Lightweight Convolutional Neural Networks via Layer-Wise Differentiable Compression.
Diao H; Hao Y; Xu S; Li G
Sensors (Basel); 2021 May; 21(10):. PubMed ID: 34065680
15. EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks.
Salehinejad H; Valaee S
IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5279-5292. PubMed ID: 33830931
16. Redundancy-Aware Pruning of Convolutional Neural Networks.
Xie G
Neural Comput; 2020 Dec; 32(12):2532-2556. PubMed ID: 33080161
17. Redundant feature pruning for accelerated inference in deep neural networks.
Ayinde BO; Inanc T; Zurada JM
Neural Netw; 2019 Oct; 118():148-158. PubMed ID: 31279285
18. ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions.
Gao H; Wang Z; Cai L; Ji S
IEEE Trans Pattern Anal Mach Intell; 2021 Aug; 43(8):2570-2581. PubMed ID: 32091991
19. Learning lightweight super-resolution networks with weight pruning.
Jiang X; Wang N; Xin J; Xia X; Yang X; Gao X
Neural Netw; 2021 Dec; 144():21-32. PubMed ID: 34450444
20. Kernel-wise difference minimization for convolutional neural network compression in metaverse.
Chang YT
Front Big Data; 2023; 6():1200382. PubMed ID: 37600500