These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.



128 related articles for article (PubMed ID: 33755555)

  • 1. Learning Rates for Stochastic Gradient Descent With Nonconvex Objectives.
    Lei Y; Tang K
    IEEE Trans Pattern Anal Mach Intell; 2021 Dec; 43(12):4505-4511. PubMed ID: 33755555

  • 2. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 3. Optimal Computational and Statistical Rates of Convergence for Sparse Nonconvex Learning Problems.
    Wang Z; Liu H; Zhang T
    Ann Stat; 2014; 42(6):2164-2201. PubMed ID: 25544785

  • 4. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 5. A mean field view of the landscape of two-layer neural networks.
    Mei S; Montanari A; Nguyen PM
    Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315

  • 6. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 7. Faster Stochastic Quasi-Newton Methods.
    Zhang Q; Huang F; Deng C; Huang H
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4388-4397. PubMed ID: 33667166

  • 8. The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima.
    Feng Y; Tu Y
    Proc Natl Acad Sci U S A; 2021 Mar; 118(9):. PubMed ID: 33619091

  • 9. Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval.
    Chen Y; Chi Y; Fan J; Ma C
    Math Program; 2019 Jul; 176(1-2):5-37. PubMed ID: 33833473

  • 10. Refined Rademacher chaos complexity bounds with applications to the multikernel learning problem.
    Lei Y; Ding L
    Neural Comput; 2014 Apr; 26(4):739-760. PubMed ID: 24479777

  • 11. Stochastic Mirror Descent on Overparameterized Nonlinear Models.
    Azizan N; Lale S; Hassibi B
    IEEE Trans Neural Netw Learn Syst; 2022 Dec; 33(12):7717-7727. PubMed ID: 34270431

  • 12. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 13. Optimization and Learning With Randomly Compressed Gradient Updates.
    Huang Z; Lei Y; Kabán A
    Neural Comput; 2023 Jun; 35(7):1234-1287. PubMed ID: 37187168

  • 14. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 15. Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions.
    Yang N; Tang C; Tu Y
    Phys Rev Lett; 2023 Jun; 130(23):237101. PubMed ID: 37354404

  • 16. Value iteration for streaming data on a continuous space with gradient method in an RKHS.
    Liu J; Xu W; Wang Y; Lian H
    Neural Netw; 2023 Sep; 166():437-445. PubMed ID: 37566954

  • 17. Stochastic Optimization for Nonconvex Problem With Inexact Hessian Matrix, Gradient, and Function.
    Liu L; Liu X; Hsieh CJ; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38039170

  • 18. A Unified Analysis of AdaGrad With Weighted Aggregation and Momentum Acceleration.
    Shen L; Chen C; Zou F; Jie Z; Sun J; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37310828

  • 19. A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics.
    Fioresi R; Chaudhari P; Soatto S
    Entropy (Basel); 2020 Jan; 22(1):. PubMed ID: 33285876

  • 20. Parameter inference for discretely observed stochastic kinetic models using stochastic gradient descent.
    Wang Y; Christley S; Mjolsness E; Xie X
    BMC Syst Biol; 2010 Jul; 4():99. PubMed ID: 20663171
