These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

117 related articles for article (PubMed ID: 37023168)

  • 1. CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization.
    Hu W; Che Z; Liu N; Li M; Tang J; Zhang C; Wang J
    IEEE Trans Neural Netw Learn Syst; 2024 Aug; 35(8):11595-11607. PubMed ID: 37023168

  • 2. Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks.
    He Y; Dong X; Kang G; Fu Y; Yan C; Yang Y
    IEEE Trans Cybern; 2020 Aug; 50(8):3594-3604. PubMed ID: 31478883

  • 3. Discrimination-Aware Network Pruning for Deep Model Compression.
    Liu J; Zhuang B; Zhuang Z; Guo Y; Huang J; Zhu J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4035-4051. PubMed ID: 33755553

  • 4. Dynamical Conventional Neural Network Channel Pruning by Genetic Wavelet Channel Search for Image Classification.
    Chen L; Gong S; Shi X; Shang M
    Front Comput Neurosci; 2021; 15():760554. PubMed ID: 34776916

  • 5. Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
    Tian G; Sun Y; Liu Y; Zeng X; Wang M; Liu Y; Zhang J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; PP():. PubMed ID: 34487502

  • 6. Exploiting Sparse Self-Representation and Particle Swarm Optimization for CNN Compression.
    Niu S; Gao K; Ma P; Gao X; Zhao H; Dong J; Chen Y; Shen D
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10266-10278. PubMed ID: 35439146

  • 7. Evolutionary Multi-Objective One-Shot Filter Pruning for Designing Lightweight Convolutional Neural Network.
    Wu T; Shi J; Zhou D; Zheng X; Li N
    Sensors (Basel); 2021 Sep; 21(17):. PubMed ID: 34502792

  • 8. Redundancy-Aware Pruning of Convolutional Neural Networks.
    Xie G
    Neural Comput; 2020 Dec; 32(12):2532-2556. PubMed ID: 33080161

  • 9. DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search.
    Guan Y; Liu N; Zhao P; Che Z; Bian K; Wang Y; Tang J
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9847-9858. PubMed ID: 35380974

  • 10. Weak sub-network pruning for strong and efficient neural networks.
    Guo Q; Wu XJ; Kittler J; Feng Z
    Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719

  • 11. Carrying Out CNN Channel Pruning in a White Box.
    Zhang Y; Lin M; Lin CW; Chen J; Wu Y; Tian Y; Ji R
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7946-7955. PubMed ID: 35157600

  • 12. ARPruning: An automatic channel pruning based on attention map ranking.
    Yuan T; Li Z; Liu B; Tang Y; Liu Y
    Neural Netw; 2024 Jun; 174():106220. PubMed ID: 38447427

  • 13. Performance-Aware Approximation of Global Channel Pruning for Multitask CNNs.
    Ye H; Zhang B; Chen T; Fan J; Wang B
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):10267-10284. PubMed ID: 37030805

  • 14. Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks.
    Chen Z; Xu TB; Du C; Liu CL; He H
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):799-813. PubMed ID: 32275616

  • 15. Learning lightweight super-resolution networks with weight pruning.
    Jiang X; Wang N; Xin J; Xia X; Yang X; Gao X
    Neural Netw; 2021 Dec; 144():21-32. PubMed ID: 34450444

  • 16. Redundant feature pruning for accelerated inference in deep neural networks.
    Ayinde BO; Inanc T; Zurada JM
    Neural Netw; 2019 Oct; 118():148-158. PubMed ID: 31279285

  • 17. Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning.
    Lin S; Ji R; Li Y; Deng C; Li X
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):574-588. PubMed ID: 30990448

  • 18. Random pruning: channel sparsity by expectation scaling factor.
    Sun C; Chen J; Li Y; Wang W; Ma T
    PeerJ Comput Sci; 2023; 9():e1564. PubMed ID: 37705629

  • 19. Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks.
    Liu C; Ma X; Zhan Y; Ding L; Tao D; Du B; Hu W; Mandic DP
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37368807

  • 20. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
    Ruan X; Liu Y; Yuan C; Li B; Hu W; Li Y; Maybank S
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4499-4513. PubMed ID: 33136545
