Molecular Biopsy of Human Tumors - a resource for Precision Medicine

271 related articles for PubMed ID 35087370 (the first 20 are listed below)

  • 1. Accelerating DNN Training Through Selective Localized Learning.
    Krithivasan S; Sen S; Venkataramani S; Raghunathan A
    Front Neurosci; 2021; 15():759807. PubMed ID: 35087370

  • 2. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
    Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265

  • 3. Improving Deep Neural Networks' Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum.
    Wang B; Ye Q
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; PP (epub ahead of print). PubMed ID: 37030680

  • 4. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 5. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
    Shin D; Kim G; Jo J; Park J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5745-5759. PubMed ID: 34890336

  • 6. Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates.
    Daruwalla K; Lipasti M
    Front Comput Neurosci; 2024; 18():1240348. PubMed ID: 38818385

  • 7. Enabling Training of Neural Networks on Noisy Hardware.
    Gokmen T
    Front Artif Intell; 2021; 4():699148. PubMed ID: 34568813

  • 8. Hebbian semi-supervised learning in a sample efficiency setting.
    Lagani G; Falchi F; Gennaro C; Amato G
    Neural Netw; 2021 Nov; 143():719-731. PubMed ID: 34438195

  • 9. Compression of Deep Neural Networks based on quantized tensor decomposition to implement on reconfigurable hardware platforms.
    Nekooei A; Safari S
    Neural Netw; 2022 Jun; 150():350-363. PubMed ID: 35344706

  • 10. Deep Learning With Asymmetric Connections and Hebbian Updates.
    Amit Y
    Front Comput Neurosci; 2019; 13():18. PubMed ID: 31019458

  • 11. XGrad: Boosting Gradient-Based Optimizers With Weight Prediction.
    Guan L; Li D; Shi Y; Meng J
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; PP (epub ahead of print). PubMed ID: 38602857

  • 12. Direct Feedback Alignment With Sparse Connections for Local Learning.
    Crafton B; Parihar A; Gebhardt E; Raychowdhury A
    Front Neurosci; 2019; 13():525. PubMed ID: 31178689

  • 13. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 14. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149():18-28. PubMed ID: 35182851

  • 15. Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks.
    Tang M; Yang Y; Amit Y
    Front Comput Neurosci; 2022; 16():789253. PubMed ID: 35386856

  • 16. Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.
    Liu J; Li B; Zhou Y; Zhao X; Zhu J; Zhang M
    Comput Intell Neurosci; 2022; 2022():9337209. PubMed ID: 35694581

  • 17. Associated Learning: Decomposing End-to-End Backpropagation Based on Autoencoders and Target Propagation.
    Kao YW; Chen HH
    Neural Comput; 2021 Jan; 33(1):174-193. PubMed ID: 33080166

  • 18. Incremental PID Controller-Based Learning Rate Scheduler for Stochastic Gradient Descent.
    Wang Z; Zhang J
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):7060-7071. PubMed ID: 36288221

  • 19. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 20. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup.
    Goldt S; Advani MS; Saxe AM; Krzakala F; Zdeborová L
    J Stat Mech; 2020 Dec; 2020(12):124010. PubMed ID: 34262607

    Page 1 of 14.