122 related articles for PubMed ID 35073273 (entry 1 below)

  • 1. Automatic Sparse Connectivity Learning for Neural Networks.
    Tang Z; Luo L; Xie B; Zhu Y; Zhao R; Bi L; Lu C
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7350-7364. PubMed ID: 35073273

  • 2. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
    Zang K; Wu W; Luo W
    Sensors (Basel); 2021 Sep; 21(19). PubMed ID: 34640730

  • 3. Adding Before Pruning: Sparse Filter Fusion for Deep Convolutional Neural Networks via Auxiliary Attention.
    Tian G; Sun Y; Liu Y; Zeng X; Wang M; Liu Y; Zhang J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; PP. PubMed ID: 34487502

  • 4. EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks.
    Poyatos J; Molina D; Martinez AD; Del Ser J; Herrera F
    Neural Netw; 2023 Jan; 158:59-82. PubMed ID: 36442374

  • 5. Playing the Lottery With Concave Regularizers for Sparse Trainable Neural Networks.
    Fracastoro G; Fosson SM; Migliorati A; Calafiore GC
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP. PubMed ID: 38478446

  • 6. Lottery Jackpots Exist in Pre-Trained Models.
    Zhang Y; Lin M; Zhong Y; Chao F; Ji R
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):14990-15004. PubMed ID: 37669203

  • 7. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
    Huang L; Zeng J; Sun S; Wang W; Wang Y; Wang K
    Entropy (Basel); 2021 Aug; 23(8). PubMed ID: 34441182

  • 8. EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks.
    Salehinejad H; Valaee S
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5279-5292. PubMed ID: 33830931

  • 9. Feature flow regularization: Improving structured sparsity in deep neural networks.
    Wu Y; Lan Y; Zhang L; Xiang Y
    Neural Netw; 2023 Apr; 161:598-613. PubMed ID: 36822145

  • 10. Extremely Sparse Networks via Binary Augmented Pruning for Fast Image Classification.
    Wang P; Li F; Li G; Cheng J
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4167-4180. PubMed ID: 34752405

  • 11. PSE-Net: Channel pruning for Convolutional Neural Networks with parallel-subnets estimator.
    Wang S; Xie T; Liu H; Zhang X; Cheng J
    Neural Netw; 2024 Jun; 174:106263. PubMed ID: 38547802

  • 12. Exploiting Sparse Self-Representation and Particle Swarm Optimization for CNN Compression.
    Niu S; Gao K; Ma P; Gao X; Zhao H; Dong J; Chen Y; Shen D
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):10266-10278. PubMed ID: 35439146

  • 13. Double Sparse Deep Reinforcement Learning via Multilayer Sparse Coding and Nonconvex Regularized Pruning.
    Zhao H; Wu J; Li Z; Chen W; Zheng Z
    IEEE Trans Cybern; 2023 Feb; 53(2):765-778. PubMed ID: 35316206

  • 14. Random pruning: channel sparsity by expectation scaling factor.
    Sun C; Chen J; Li Y; Wang W; Ma T
    PeerJ Comput Sci; 2023; 9:e1564. PubMed ID: 37705629

  • 15. Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks.
    Wu T; Li X; Zhou D; Li N; Shi J
    Sensors (Basel); 2021 Jan; 21(3). PubMed ID: 33525527

  • 16. Joint Structure and Parameter Optimization of Multiobjective Sparse Neural Network.
    Huang J; Sun W; Huang L
    Neural Comput; 2021 Mar; 33(4):1113-1143. PubMed ID: 33513329

  • 17. Efficient Neural Network Compression Inspired by Compressive Sensing.
    Gao W; Guo Y; Ma S; Li G; Kwong S
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):1965-1979. PubMed ID: 35802547

  • 18. A Synaptic Pruning-Based Spiking Neural Network for Hand-Written Digits Classification.
    Faghihi F; Alashwal H; Moustafa AA
    Front Artif Intell; 2022; 5:680165. PubMed ID: 35280233

  • 19. Discrimination-Aware Network Pruning for Deep Model Compression.
    Liu J; Zhuang B; Zhuang Z; Guo Y; Huang J; Zhu J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4035-4051. PubMed ID: 33755553

  • 20. Redundant feature pruning for accelerated inference in deep neural networks.
    Ayinde BO; Inanc T; Zurada JM
    Neural Netw; 2019 Oct; 118:148-158. PubMed ID: 31279285
