These tools will no longer be maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

208 related articles for article (PubMed ID: 31831449)

  • 1. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449
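Entry 1 studies SGD for nonconvex learning without a bounded-gradient assumption, typically under a decaying step size. As background only, a minimal SGD sketch on a toy noisy quadratic (the step schedule, function names, and objective here are illustrative assumptions, not taken from the paper):

```python
import random

def sgd(grad, w0, step0=0.1, n_steps=1000, seed=0):
    """Plain SGD with an eta_t = step0 / sqrt(t) step-size schedule."""
    rng = random.Random(seed)
    w = w0
    for t in range(1, n_steps + 1):
        w -= (step0 / t ** 0.5) * grad(w, rng)
    return w

# Toy objective f(w) = (w - 3)^2 with a noisy gradient oracle.
def noisy_grad(w, rng):
    return 2.0 * (w - 3.0) + rng.gauss(0.0, 0.1)

w_hat = sgd(noisy_grad, w0=0.0)  # approaches the minimizer w = 3
```

The 1/sqrt(t) schedule is the regime such convergence analyses typically consider; the quadratic merely makes the result easy to check.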

  • 2. Learning Rates for Stochastic Gradient Descent With Nonconvex Objectives.
    Lei Y; Tang K
    IEEE Trans Pattern Anal Mach Intell; 2021 Dec; 43(12):4505-4511. PubMed ID: 33755555

  • 3. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 4. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 5. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
    Xu D; Zhang S; Zhang H; Mandic DP
    Neural Netw; 2021 Jul; 139():17-23. PubMed ID: 33662649
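Entry 5 analyzes the convergence of RMSProp for nonconvex optimization. As an illustration of the update rule being analyzed, a minimal one-dimensional RMSProp sketch (the hyperparameters and toy quadratic objective are illustrative assumptions, not from the paper):

```python
def rmsprop(grad, w0, lr=0.01, beta=0.9, eps=1e-8, n_steps=2000):
    """RMSProp: divide each gradient by a running RMS of past gradients."""
    w, v = w0, 0.0
    for _ in range(n_steps):
        g = grad(w)
        v = beta * v + (1.0 - beta) * g * g  # running second-moment estimate
        w -= lr * g / (v ** 0.5 + eps)       # normalized gradient step
    return w

w_hat = rmsprop(lambda w: 2.0 * (w - 3.0), w0=0.0)  # settles near w = 3
```

The per-coordinate normalization by the running RMS is what distinguishes RMSProp from plain SGD and is the source of the analytical difficulty the paper addresses.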

  • 6. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 7. A mean field view of the landscape of two-layer neural networks.
    Mei S; Montanari A; Nguyen PM
    Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315

  • 8. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 9. Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces.
    Geiersbach C; Scarinci T
    Comput Optim Appl; 2021; 78(3):705-740. PubMed ID: 33707813

  • 10. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 11. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 12. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303

  • 13. Analysis of Online Composite Mirror Descent Algorithm.
    Lei Y; Zhou DX
    Neural Comput; 2017 Mar; 29(3):825-860. PubMed ID: 28095196

  • 14. A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics.
    Fioresi R; Chaudhari P; Soatto S
    Entropy (Basel); 2020 Jan; 22(1):. PubMed ID: 33285876

  • 15. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
    Pu S; Olshevsky A; Paschalidis IC
    IEEE Trans Automat Contr; 2022 Nov; 67(11):5900-5915. PubMed ID: 37284602

  • 16. Sign Stochastic Gradient Descents without bounded gradient assumption for the finite sum minimization.
    Sun T; Li D
    Neural Netw; 2022 May; 149():195-203. PubMed ID: 35248809
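Entry 16 studies sign-based stochastic gradient descent without a bounded-gradient assumption. A minimal one-dimensional signSGD sketch (the step schedule and toy objective are illustrative assumptions, not from the paper):

```python
def sign_sgd(grad, w0, step0=0.5, n_steps=500):
    """signSGD: the update uses only the sign of each gradient coordinate."""
    w = w0
    for t in range(1, n_steps + 1):
        g = grad(w)
        s = (g > 0) - (g < 0)        # sign(g) in {-1, 0, +1}
        w -= (step0 / t ** 0.5) * s  # step magnitude comes from the schedule
    return w

w_hat = sign_sgd(lambda w: 2.0 * (w - 3.0), w0=0.0)  # settles near w = 3
```

Because the gradient magnitude is discarded, boundedness of the gradient plays no role in the update itself, which is one reason sign methods are attractive in this line of work.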

  • 17. diffGrad: An Optimization Method for Convolutional Neural Networks.
    Dubey SR; Chakraborty S; Roy SK; Mukherjee S; Singh SK; Chaudhuri BB
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4500-4511. PubMed ID: 31880565

  • 18. Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning.
    Kawaguchi K; Huang J; Kaelbling LP
    Neural Comput; 2019 Dec; 31(12):2293-2323. PubMed ID: 31614105

  • 19. A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
    Xu J; Zhang W; Wang F
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8036-8047. PubMed ID: 34449356

  • 20. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149():18-28. PubMed ID: 35182851
