BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

124 related articles for article (PubMed ID: 33237857)

  • 1. Novel Convergence Results of Adaptive Stochastic Gradient Descents.
    Sun T; Qiao L; Liao Q; Li D
    IEEE Trans Image Process; 2021; 30():1044-1056. PubMed ID: 33237857

  • 2. Nonergodic Complexity of Proximal Inertial Gradient Descents.
    Sun T; Qiao L; Li D
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4613-4626. PubMed ID: 32997636

  • 3. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 4. Gradient Descent Learning With Floats.
    Sun T; Tang K; Li D
    IEEE Trans Cybern; 2022 Mar; 52(3):1763-1771. PubMed ID: 32525810

  • 5. Sign Stochastic Gradient Descents without bounded gradient assumption for the finite sum minimization.
    Sun T; Li D
    Neural Netw; 2022 May; 149():195-203. PubMed ID: 35248809

  • 6. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 7. Algorithms for accelerated convergence of adaptive PCA.
    Chatterjee C; Kang Z; Roychowdhury VP
    IEEE Trans Neural Netw; 2000; 11(2):338-55. PubMed ID: 18249765

  • 8. A multivariate adaptive gradient algorithm with reduced tuning efforts.
    Saab S; Saab K; Phoha S; Zhu M; Ray A
    Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371

  • 9. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
    Pu S; Olshevsky A; Paschalidis IC
    IEEE Trans Automat Contr; 2022 Nov; 67(11):5900-5915. PubMed ID: 37284602

  • 10. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM.
    Tong Q; Liang G; Bi J
    Neurocomputing (Amst); 2022 Apr; 481():333-356. PubMed ID: 35342226

  • 11. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
    Sun H; Shen L; Zhong Q; Ding L; Chen S; Sun J; Li J; Sun G; Tao D
    Neural Netw; 2024 Jan; 169():506-519. PubMed ID: 37944247

  • 12. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks.
    Xu D; Zhang H; Liu L
    Neural Comput; 2010 Oct; 22(10):2655-77. PubMed ID: 20608871

  • 13. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 14. A Minibatch Proximal Stochastic Recursive Gradient Algorithm Using a Trust-Region-Like Scheme and Barzilai-Borwein Stepsizes.
    Yu T; Liu XW; Dai YH; Sun J
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4627-4638. PubMed ID: 33021942

  • 15. Adaptive Temporal Difference Learning With Linear Function Approximation.
    Sun T; Shen H; Chen T; Li D
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):8812-8824. PubMed ID: 34648431

  • 16. A general double-proximal gradient algorithm for d.c. programming.
    Banert S; Boț RI
    Math Program; 2019; 178(1):301-326. PubMed ID: 31762494

  • 17. Convergence of gradient method with momentum for two-layer feedforward neural networks.
    Zhang N; Wu W; Zheng G
    IEEE Trans Neural Netw; 2006 Mar; 17(2):522-5. PubMed ID: 16566479

  • 18. Stochastic gradient Langevin dynamics with adaptive drifts.
    Kim S; Song Q; Liang F
    J Stat Comput Simul; 2022; 92(2):318-336. PubMed ID: 35559269

  • 19. Training Neural Networks by Lifted Proximal Operator Machines.
    Li J; Xiao M; Fang C; Dai Y; Xu C; Lin Z
    IEEE Trans Pattern Anal Mach Intell; 2022 Jun; 44(6):3334-3348. PubMed ID: 33382647

  • 20. ANASA-a stochastic reinforcement algorithm for real-valued neural computation.
    Vasilakos AV; Loukas NH
    IEEE Trans Neural Netw; 1996; 7(4):830-42. PubMed ID: 18263479
