111 related articles for the article with PubMed ID 39276589

  • 1. Intermediate-grained kernel elements pruning with structured sparsity.
    Zhang P; Zhao L; Tian C; Duan Z
    Neural Netw; 2024 Dec; 180():106708. PubMed ID: 39276589

  • 2. GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
    Niu W; Li Z; Ma X; Dong P; Zhou G; Qian X; Lin X; Wang Y; Ren B
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6224-6239. PubMed ID: 34133272

  • 3. Feature flow regularization: Improving structured sparsity in deep neural networks.
    Wu Y; Lan Y; Zhang L; Xiang Y
    Neural Netw; 2023 Apr; 161():598-613. PubMed ID: 36822145

  • 4. Weak sub-network pruning for strong and efficient neural networks.
    Guo Q; Wu XJ; Kittler J; Feng Z
    Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719

  • 5. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
    Chen L; Gong S; Shi X; Shang M
    Front Comput Neurosci; 2021; 15():760554. PubMed ID: 34776916

  • 6. Random pruning: channel sparsity by expectation scaling factor.
    Sun C; Chen J; Li Y; Wang W; Ma T
    PeerJ Comput Sci; 2023; 9():e1564. PubMed ID: 37705629

  • 7. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 8. Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.
    Wu X; Lin DT; Chen R; Bhattacharyya SS
    J Neural Eng; 2023 Jul; 20(4):. PubMed ID: 37429288

  • 9. Reweighted Alternating Direction Method of Multipliers for DNN weight pruning.
    Yuan M; Du L; Jiang F; Bai J; Chen G
    Neural Netw; 2024 Nov; 179():106534. PubMed ID: 39059046

  • 10. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
    Huang L; Zeng J; Sun S; Wang W; Wang Y; Wang K
    Entropy (Basel); 2021 Aug; 23(8):. PubMed ID: 34441182

  • 11. SOKS: Automatic Searching of the Optimal Kernel Shapes for Stripe-Wise Network Pruning.
    Liu G; Zhang K; Lv M
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9912-9924. PubMed ID: 35412989

  • 12. PSE-Net: Channel pruning for Convolutional Neural Networks with parallel-subnets estimator.
    Wang S; Xie T; Liu H; Zhang X; Cheng J
    Neural Netw; 2024 Jun; 174():106263. PubMed ID: 38547802

  • 13. Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
    Tian G; Sun Y; Liu Y; Zeng X; Wang M; Liu Y; Zhang J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; PP():. PubMed ID: 34487502

  • 14. Dynamically Optimizing Network Structure Based on Synaptic Pruning in the Brain.
    Zhao F; Zeng Y
    Front Syst Neurosci; 2021; 15():620558. PubMed ID: 34177473

  • 15. A Hardware-Friendly High-Precision CNN Pruning Method and Its FPGA Implementation.
    Sui X; Lv Q; Zhi L; Zhu B; Yang Y; Zhang Y; Tan Z
    Sensors (Basel); 2023 Jan; 23(2):. PubMed ID: 36679624

  • 16. Optimizing the Deep Neural Networks by Layer-Wise Refined Pruning and the Acceleration on FPGA.
    Li H; Yue X; Wang Z; Chai Z; Wang W; Tomiyama H; Meng L
    Comput Intell Neurosci; 2022; 2022():8039281. PubMed ID: 35694575

  • 17. Redundant feature pruning for accelerated inference in deep neural networks.
    Ayinde BO; Inanc T; Zurada JM
    Neural Netw; 2019 Oct; 118():148-158. PubMed ID: 31279285

  • 18. Differentiable Network Pruning via Polarization of Probabilistic Channelwise Soft Masks.
    Ma M; Wang J; Yu Z
    Comput Intell Neurosci; 2022; 2022():7775419. PubMed ID: 35571691

  • 19. Learning lightweight super-resolution networks with weight pruning.
    Jiang X; Wang N; Xin J; Xia X; Yang X; Gao X
    Neural Netw; 2021 Dec; 144():21-32. PubMed ID: 34450444

  • 20. SAAF: Self-Adaptive Attention Factor-Based Taylor-Pruning on Convolutional Neural Networks.
    Lu Y; Gong M; Feng K; Liu J; Guan Z; Li H
    IEEE Trans Neural Netw Learn Syst; 2024 Aug; PP():. PubMed ID: 39213269

    Page 1 of 6.