

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

155 related articles for the article with PubMed ID 34133272 (an illustrative sketch of the structured-pruning theme shared by these entries follows the list)

  • 1. GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
    Niu W; Li Z; Ma X; Dong P; Zhou G; Qian X; Lin X; Wang Y; Ren B
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6224-6239. PubMed ID: 34133272

  • 2. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 3. Non-Structured DNN Weight Pruning-Is It Beneficial in Any Platform?
    Ma X; Lin S; Ye S; He Z; Zhang L; Yuan G; Tan SH; Li Z; Fan D; Qian X; Lin X; Ma K; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4930-4944. PubMed ID: 33735086

  • 4. Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning.
    Lin S; Ji R; Li Y; Deng C; Li X
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):574-588. PubMed ID: 30990448

  • 5. Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.
    Kulkarni U; S M M; Gurlahosur SV; Bhogar G
    Neural Netw; 2021 Apr; 136():28-39. PubMed ID: 33429131

  • 6. Feature flow regularization: Improving structured sparsity in deep neural networks.
    Wu Y; Lan Y; Zhang L; Xiang Y
    Neural Netw; 2023 Apr; 161():598-613. PubMed ID: 36822145

  • 7. Exploring Fine-Grained Sparsity in Convolutional Neural Networks for Efficient Inference.
    Wang L; Guo Y; Dong X; Wang Y; Ying X; Lin Z; An W
    IEEE Trans Pattern Anal Mach Intell; 2023 Apr; 45(4):4474-4493. PubMed ID: 35881599

  • 8. Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.
    Wu X; Lin DT; Chen R; Bhattacharyya SS
    J Neural Eng; 2023 Jul; 20(4):. PubMed ID: 37429288

  • 9. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
    Huang L; Zeng J; Sun S; Wang W; Wang Y; Wang K
    Entropy (Basel); 2021 Aug; 23(8):. PubMed ID: 34441182

  • 10. Weak sub-network pruning for strong and efficient neural networks.
    Guo Q; Wu XJ; Kittler J; Feng Z
    Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719

  • 11. Resource-constrained FPGA/DNN co-design.
    Zhang Z; Kouzani AZ
    Neural Comput Appl; 2021; 33(21):14741-14751. PubMed ID: 34025038

  • 12. Edge deep learning for neural implants: a case study of seizure detection and prediction.
    Liu X; Richardson AG
    J Neural Eng; 2021 Apr; 18(4):. PubMed ID: 33794507

  • 13. CRESPR: Modular sparsification of DNNs to improve pruning performance and model interpretability.
    Kang T; Ding W; Chen P
    Neural Netw; 2024 Apr; 172():106067. PubMed ID: 38199151

  • 14. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
    Zang K; Wu W; Luo W
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640730

  • 15. Structured pruning of recurrent neural networks through neuron selection.
    Wen L; Zhang X; Bai H; Xu Z
    Neural Netw; 2020 Mar; 123():134-141. PubMed ID: 31855748

  • 16. LAP: Latency-aware automated pruning with dynamic-based filter selection.
    Chen Z; Liu C; Yang W; Li K; Li K
    Neural Netw; 2022 Aug; 152():407-418. PubMed ID: 35609502

  • 17. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
    Ruan X; Liu Y; Yuan C; Li B; Hu W; Li Y; Maybank S
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4499-4513. PubMed ID: 33136545

  • 18. Optimizing the Deep Neural Networks by Layer-Wise Refined Pruning and the Acceleration on FPGA.
    Li H; Yue X; Wang Z; Chai Z; Wang W; Tomiyama H; Meng L
    Comput Intell Neurosci; 2022; 2022():8039281. PubMed ID: 35694575

  • 19. Learning lightweight super-resolution networks with weight pruning.
    Jiang X; Wang N; Xin J; Xia X; Yang X; Gao X
    Neural Netw; 2021 Dec; 144():21-32. PubMed ID: 34450444

  • 20. Spartus: A 9.4 TOp/s FPGA-Based LSTM Accelerator Exploiting Spatio-Temporal Sparsity.
    Gao C; Delbruck T; Liu SC
    IEEE Trans Neural Netw Learn Syst; 2022 Jun; PP():. PubMed ID: 35687629

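The entries above cluster around one technique: structured weight pruning, i.e. removing whole filters, channels, or blocks so that the resulting sparsity translates into real speedups on mobile, FPGA, and embedded hardware. As a hedged illustration of that shared idea only, and not the method of any single listed article, the following minimal NumPy sketch prunes convolution filters by L1 magnitude; the function name, the prune ratio, and the toy weight tensor are all hypothetical.

    # Minimal sketch: magnitude-based structured (filter) pruning in NumPy.
    # Everything here (names, shapes, prune ratio) is hypothetical and for
    # illustration only; it is not the method of any listed article.
    import numpy as np

    def prune_filters_by_l1(conv_weight, prune_ratio):
        """Zero out the output filters with the smallest L1 norms.

        conv_weight : ndarray of shape (out_channels, in_channels, kH, kW)
        prune_ratio : fraction of output filters to prune, in [0, 1]
        Returns a pruned copy of the weights and a boolean keep-mask.
        """
        out_channels = conv_weight.shape[0]
        # L1 norm per filter: the usual saliency score in magnitude pruning.
        scores = np.abs(conv_weight).reshape(out_channels, -1).sum(axis=1)
        n_prune = int(round(prune_ratio * out_channels))
        drop = np.argsort(scores)[:n_prune]   # weakest filters first
        keep_mask = np.ones(out_channels, dtype=bool)
        keep_mask[drop] = False
        pruned = conv_weight.copy()
        pruned[~keep_mask] = 0.0              # whole filters go to zero (structured)
        return pruned, keep_mask

    # Toy usage: prune half of 8 random 3x3 filters.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(8, 3, 3, 3))
    w_pruned, mask = prune_filters_by_l1(w, prune_ratio=0.5)
    print("kept filters:", np.flatnonzero(mask))

Plain L1 magnitude is only the simplest saliency criterion; the listed articles refine filter or block selection with structure-sparsity regularization (entries 4 and 6), ADMM-based optimization (entry 2), latency-aware selection (entry 16), and hardware co-design (entries 11, 18, and 20).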