BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

121 related articles for article (PubMed ID: 35073273)

  • 21. SSGD: SPARSITY-PROMOTING STOCHASTIC GRADIENT DESCENT ALGORITHM FOR UNBIASED DNN PRUNING.
    Lee CH; Fedorov I; Rao BD; Garudadri H
    Proc IEEE Int Conf Acoust Speech Signal Process; 2020 May; 2020():5410-5414. PubMed ID: 33162834

  • 22. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
    Ruan X; Liu Y; Yuan C; Li B; Hu W; Li Y; Maybank S
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4499-4513. PubMed ID: 33136545

  • 23. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.
    Kim J; Calhoun VD; Shim E; Lee JH
    Neuroimage; 2016 Jan; 124(Pt A):127-146. PubMed ID: 25987366

  • 24. Differentiable Network Pruning via Polarization of Probabilistic Channelwise Soft Masks.
    Ma M; Wang J; Yu Z
    Comput Intell Neurosci; 2022; 2022():7775419. PubMed ID: 35571691

  • 25. Learning lightweight super-resolution networks with weight pruning.
    Jiang X; Wang N; Xin J; Xia X; Yang X; Gao X
    Neural Netw; 2021 Dec; 144():21-32. PubMed ID: 34450444

  • 26. Transformed ℓ1 regularization for learning sparse deep neural networks.
    Ma R; Miao J; Niu L; Zhang P
    Neural Netw; 2019 Nov; 119():286-298. PubMed ID: 31499353

  • 27. Structured pruning of recurrent neural networks through neuron selection.
    Wen L; Zhang X; Bai H; Xu Z
    Neural Netw; 2020 Mar; 123():134-141. PubMed ID: 31855748

  • 28. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.
    Mocanu DC; Mocanu E; Stone P; Nguyen PH; Gibescu M; Liotta A
    Nat Commun; 2018 Jun; 9(1):2383. PubMed ID: 29921910

  • 29. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100():49-58. PubMed ID: 29471195

  • 30. Neural Classifiers with Limited Connectivity and Recurrent Readouts.
    Kushnir L; Fusi S
    J Neurosci; 2018 Nov; 38(46):9900-9924. PubMed ID: 30249794

  • 31. LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks.
    Tartaglione E; Bragagnolo A; Fiandrotti A; Grangetto M
    Neural Netw; 2022 Feb; 146():230-237. PubMed ID: 34906759

  • 32. Exploring Fine-Grained Sparsity in Convolutional Neural Networks for Efficient Inference.
    Wang L; Guo Y; Dong X; Wang Y; Ying X; Lin Z; An W
    IEEE Trans Pattern Anal Mach Intell; 2023 Apr; 45(4):4474-4493. PubMed ID: 35881599

  • 33. RGP: Neural Network Pruning Through Regular Graph With Edges Swapping.
    Chen Z; Xiang J; Lu Y; Xuan Q; Wang Z; Chen G; Yang X
    IEEE Trans Neural Netw Learn Syst; 2023 Jun [Epub ahead of print]. PubMed ID: 37310824

  • 34. Slimming Neural Networks Using Adaptive Connectivity Scores.
    Ravi Ganesh M; Blanchard D; Corso JJ; Sekeh SY
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3794-3808. PubMed ID: 35998170

  • 35. sCL-ST: Supervised Contrastive Learning With Semantic Transformations for Multiple Lead ECG Arrhythmia Classification.
    Le D; Truong S; Brijesh P; Adjeroh DA; Le N
    IEEE J Biomed Health Inform; 2023 Jun; 27(6):2818-2828. PubMed ID: 37028019

  • 36. Weak sub-network pruning for strong and efficient neural networks.
    Guo Q; Wu XJ; Kittler J; Feng Z
    Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719

  • 37. Dynamically Optimizing Network Structure Based on Synaptic Pruning in the Brain.
    Zhao F; Zeng Y
    Front Syst Neurosci; 2021; 15():620558. PubMed ID: 34177473

  • 38. Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype.
    Liu C; Bellec G; Vogginger B; Kappel D; Partzsch J; Neumärker F; Höppner S; Maass W; Furber SB; Legenstein R; Mayr CG
    Front Neurosci; 2018; 12():840. PubMed ID: 30505263

  • 39. Rethinking Weight Decay for Efficient Neural Network Pruning.
    Tessier H; Gripon V; Léonardon M; Arzel M; Hannagan T; Bertrand D
    J Imaging; 2022 Mar; 8(3). PubMed ID: 35324619

  • 40. When Sparse Neural Network Meets Label Noise Learning: A Multistage Learning Framework.
    Jiang R; Yan Y; Xue JH; Wang B; Wang H
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2208-2222. PubMed ID: 35834450
