273 related articles for article (PubMed ID: 31855748)
1. Structured pruning of recurrent neural networks through neuron selection.
Wen L; Zhang X; Bai H; Xu Z
Neural Netw; 2020 Mar; 123():134-141. PubMed ID: 31855748
2. On the compression of neural networks using ℓ
de Resende Oliveira FD; Batista ELO; Seara R
Neural Netw; 2024 Mar; 171():343-352. PubMed ID: 38113719
3. Block-term tensor neural networks.
Ye J; Li G; Chen D; Yang H; Zhe S; Xu Z
Neural Netw; 2020 Oct; 130():11-21. PubMed ID: 32589587
4. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
Zang K; Wu W; Luo W
Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640730
5. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706
6. Feature flow regularization: Improving structured sparsity in deep neural networks.
Wu Y; Lan Y; Zhang L; Xiang Y
Neural Netw; 2023 Apr; 161():598-613. PubMed ID: 36822145
7. Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.
Wu X; Lin DT; Chen R; Bhattacharyya SS
J Neural Eng; 2023 Jul; 20(4):. PubMed ID: 37429288
8. GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
Niu W; Li Z; Ma X; Dong P; Zhou G; Qian X; Lin X; Wang Y; Ren B
IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6224-6239. PubMed ID: 34133272
9. Learning lightweight super-resolution networks with weight pruning.
Jiang X; Wang N; Xin J; Xia X; Yang X; Gao X
Neural Netw; 2021 Dec; 144():21-32. PubMed ID: 34450444
10. MobilePrune: Neural Network Compression via
Shao Y; Zhao K; Cao Z; Peng Z; Peng X; Li P; Wang Y; Ma J
Sensors (Basel); 2022 May; 22(11):. PubMed ID: 35684708
11. A hybrid model based on neural networks for biomedical relation extraction.
Zhang Y; Lin H; Yang Z; Wang J; Zhang S; Sun Y; Yang L
J Biomed Inform; 2018 May; 81():83-92. PubMed ID: 29601989
12. SGORNN: Combining scalar gates and orthogonal constraints in recurrent networks.
Taylor-Melanson W; Ferreira MD; Matwin S
Neural Netw; 2023 Feb; 159():25-33. PubMed ID: 36525915
13. A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms.
Rapuano E; Pacini T; Fanucci L
Comput Intell Neurosci; 2022; 2022():9485933. PubMed ID: 35602644
14. EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks.
Poyatos J; Molina D; Martinez AD; Del Ser J; Herrera F
Neural Netw; 2023 Jan; 158():59-82. PubMed ID: 36442374
15. Considerations in using recurrent neural networks to probe neural dynamics.
Kao JC
J Neurophysiol; 2019 Dec; 122(6):2504-2521. PubMed ID: 31619125
16. Redundant feature pruning for accelerated inference in deep neural networks.
Ayinde BO; Inanc T; Zurada JM
Neural Netw; 2019 Oct; 118():148-158. PubMed ID: 31279285
17. Compressing Deep Networks by Neuron Agglomerative Clustering.
Wang LN; Liu W; Liu X; Zhong G; Roy PP; Dong J; Huang K
Sensors (Basel); 2020 Oct; 20(21):. PubMed ID: 33114078
18. Weak sub-network pruning for strong and efficient neural networks.
Guo Q; Wu XJ; Kittler J; Feng Z
Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719
19. DeepCompNet: A Novel Neural Net Model Compression Architecture.
Mary Shanthi Rani M; Chitra P; Lakshmanan S; Kalpana Devi M; Sangeetha R; Nithya S
Comput Intell Neurosci; 2022; 2022():2213273. PubMed ID: 35242176
20. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
Huang L; Zeng J; Sun S; Wang W; Wang Y; Wang K
Entropy (Basel); 2021 Aug; 23(8):. PubMed ID: 34441182