
221 related articles for PubMed ID 28668660 (Accelerating deep neural network training with inconsistent stochastic gradient descent).

  • 1. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93:219-229. PubMed ID: 28668660

  • 2. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 3. Accelerating DNN Training Through Selective Localized Learning.
    Krithivasan S; Sen S; Venkataramani S; Raghunathan A
    Front Neurosci; 2021; 15:759807. PubMed ID: 35087370

  • 4. The general inefficiency of batch training for gradient descent learning.
    Wilson DR; Martinez TR
    Neural Netw; 2003 Dec; 16(10):1429-1451. PubMed ID: 14622875

  • 5. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 6. Learning curves for stochastic gradient descent in linear feedforward networks.
    Werfel J; Xie X; Seung HS
    Neural Comput; 2005 Dec; 17(12):2699-2718. PubMed ID: 16212768

  • 7. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.
    Arcos-García Á; Álvarez-García JA; Soria-Morillo LM
    Neural Netw; 2018 Mar; 99:158-165. PubMed ID: 29427842

  • 8. Ensemble Neural Networks (ENN): A gradient-free stochastic method.
    Chen Y; Chang H; Meng J; Zhang D
    Neural Netw; 2019 Feb; 110:170-185. PubMed ID: 30562650

  • 9. Block-cyclic stochastic coordinate descent for deep neural networks.
    Nakamura K; Soatto S; Hong BW
    Neural Netw; 2021 Jul; 139:348-357. PubMed ID: 33887584

  • 10. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
    Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265

  • 11. A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics.
    Fioresi R; Chaudhari P; Soatto S
    Entropy (Basel); 2020 Jan; 22(1). PubMed ID: 33285876

  • 12. Estimating Systolic Blood Pressure Using Convolutional Neural Networks.
    Rastegar S; Gholamhosseini H; Lowe A; Mehdipour F; Lindén M
    Stud Health Technol Inform; 2019; 261:143-149. PubMed ID: 31156106

  • 13. Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions.
    Yang N; Tang C; Tu Y
    Phys Rev Lett; 2023 Jun; 130(23):237101. PubMed ID: 37354404

  • 14. Accelerating Very Deep Convolutional Networks for Classification and Detection.
    Zhang X; Zou J; He K; Sun J
    IEEE Trans Pattern Anal Mach Intell; 2016 Oct; 38(10):1943-1955. PubMed ID: 26599615

  • 15. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149:18-28. PubMed ID: 35182851

  • 16. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
    Shin D; Kim G; Jo J; Park J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5745-5759. PubMed ID: 34890336

  • 17. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup.
    Goldt S; Advani MS; Saxe AM; Krzakala F; Zdeborová L
    J Stat Mech; 2020 Dec; 2020(12):124010. PubMed ID: 34262607

  • 18. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5). PubMed ID: 33286332

  • 19. Deep Neural Networks with Multistate Activation Functions.
    Cai C; Xu Y; Ke D; Su K
    Comput Intell Neurosci; 2015; 2015:721367. PubMed ID: 26448739

  • 20. A mean field view of the landscape of two-layer neural networks.
    Mei S; Montanari A; Nguyen PM
    Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315

Page 1 of 12.