90 related articles for article (PubMed ID: 32997636)

  • 1. Nonergodic Complexity of Proximal Inertial Gradient Descents.
    Sun T; Qiao L; Li D
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4613-4626. PubMed ID: 32997636

  • 2. Novel Convergence Results of Adaptive Stochastic Gradient Descents.
    Sun T; Qiao L; Liao Q; Li D
    IEEE Trans Image Process; 2021; 30():1044-1056. PubMed ID: 33237857

  • 3. Inertial proximal alternating minimization for nonconvex and nonsmooth problems.
    Zhang Y; He S
    J Inequal Appl; 2017; 2017(1):232. PubMed ID: 29026279

  • 4. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 5. Gradient Descent Learning With Floats.
    Sun T; Tang K; Li D
    IEEE Trans Cybern; 2022 Mar; 52(3):1763-1771. PubMed ID: 32525810

  • 6. A general double-proximal gradient algorithm for d.c. programming.
    Banert S; Boț RI
    Math Program; 2019; 178(1):301-326. PubMed ID: 31762494

  • 7. Sign Stochastic Gradient Descents without bounded gradient assumption for the finite sum minimization.
    Sun T; Li D
    Neural Netw; 2022 May; 149():195-203. PubMed ID: 35248809

  • 8. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303

  • 9. Scalable Proximal Jacobian Iteration Method With Global Convergence Analysis for Nonconvex Unconstrained Composite Optimizations.
    Zhang H; Qian J; Gao J; Yang J; Xu C
    IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2825-2839. PubMed ID: 30668503

  • 10. Inertial Nonconvex Alternating Minimizations for the Image Deblurring.
    Sun T; Barrio R; Rodriguez M; Jiang H
    IEEE Trans Image Process; 2019 Dec; 28(12):6211-6224. PubMed ID: 31265396

  • 11. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM.
    Tong Q; Liang G; Bi J
    Neurocomputing (Amst); 2022 Apr; 481():333-356. PubMed ID: 35342226

  • 12. A Minibatch Proximal Stochastic Recursive Gradient Algorithm Using a Trust-Region-Like Scheme and Barzilai-Borwein Stepsizes.
    Yu T; Liu XW; Dai YH; Sun J
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4627-4638. PubMed ID: 33021942

  • 13. An incremental mirror descent subgradient algorithm with random sweeping and proximal step.
    Boț RI; Böhm A
    Optimization; 2019; 68(1):33-50. PubMed ID: 30828224

  • 14. Efficient Recovery of Low-Rank Matrix via Double Nonconvex Nonsmooth Rank Minimization.
    Zhang H; Gong C; Qian J; Zhang B; Xu C; Yang J
    IEEE Trans Neural Netw Learn Syst; 2019 Oct; 30(10):2916-2925. PubMed ID: 30892254

  • 15. A Hybrid Stochastic-Deterministic Minibatch Proximal Gradient Method for Efficient Optimization and Generalization.
    Zhou P; Yuan X; Lin Z; Hoi S
    IEEE Trans Pattern Anal Mach Intell; 2021 Jun; PP():. PubMed ID: 34101583

  • 16. A second-order dynamical approach with variable damping to nonconvex smooth minimization.
    Boţ RI; Csetnek ER; László SC
    Appl Anal; 2020; 99(3):361-378. PubMed ID: 32256253

  • 17. Scalable estimation strategies based on stochastic approximations: Classical results and new insights.
    Airoldi EM; Toulis P
    Stat Comput; 2015 Jul; 25(4):781-795. PubMed ID: 26139959

  • 18. Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems.
    Zhu J; Wang L; Spall JC
    IEEE Trans Neural Netw Learn Syst; 2020 Aug; 31(8):3087-3099. PubMed ID: 31536020

  • 19. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 20. Faster First-Order Methods for Stochastic Non-Convex Optimization on Riemannian Manifolds.
    Zhou P; Yuan XT; Yan S; Feng J
    IEEE Trans Pattern Anal Mach Intell; 2021 Feb; 43(2):459-472. PubMed ID: 31398110
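
Illustrative note: item 1 above (PubMed ID 32997636) concerns proximal inertial gradient methods for composite objectives f(x) + g(x), with f smooth and g nonsmooth but proximable. The sketch below shows only the generic inertial (heavy-ball) proximal gradient update, not the specific algorithm or the nonergodic complexity analysis of that paper; the names f_grad, prox_g, step, and beta are illustrative assumptions.

    # Minimal sketch, assuming f is smooth with gradient f_grad and g = lam*||.||_1
    # with the soft-thresholding proximal operator. Generic update:
    #   x_{k+1} = prox_{step*g}( x_k - step*grad f(x_k) + beta*(x_k - x_{k-1}) )
    import numpy as np

    def l1_prox(v, t):
        # Proximal operator of t*||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def inertial_proximal_gradient(f_grad, prox_g, x0, step=0.1, beta=0.5, iters=200):
        x_prev = x0.copy()
        x = x0.copy()
        for _ in range(iters):
            y = x - step * f_grad(x) + beta * (x - x_prev)  # gradient step plus inertial term
            x_prev, x = x, prox_g(y, step)                  # proximal step on the nonsmooth part
        return x

    # Toy usage: LASSO-type objective 0.5*||Ax - b||^2 + lam*||x||_1
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for f(x) = 0.5*||Ax - b||^2
    x_hat = inertial_proximal_gradient(lambda x: A.T @ (A @ x - b),
                                       lambda v, t: l1_prox(v, lam * t),
                                       np.zeros(5), step=step)

With beta = 0 this reduces to the standard proximal gradient (ISTA-type) iteration; the inertial term beta*(x_k - x_{k-1}) is the momentum-style extrapolation that the papers in this list analyze under various stochastic and nonconvex settings.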
