

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

188 related articles for the article with PubMed ID 34785445 (page 1 of 10; entries 1-20 shown). Illustrative sketches of two families of update rules that recur across these entries, AdaBound and classical momentum/RMSProp, follow the list.

  • 1. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization.
    Liu J; Kong J; Xu D; Qi M; Lu Y
    Neural Netw; 2022 Jan; 145():300-307. PubMed ID: 34785445

  • 2. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 3. Shuffling-type gradient method with bandwidth-based step sizes for finite-sum optimization.
    Liang Y; Yang Y; Liu J; Xu D
    Neural Netw; 2024 Nov; 179():106514. PubMed ID: 39024708

  • 4. UAdam: Unified Adam-Type Algorithmic Framework for Nonconvex Optimization.
    Jiang Y; Liu J; Xu D; Mandic DP
    Neural Comput; 2024 Aug; 36(9):1912-1938. PubMed ID: 39106463

  • 5. Momentum Acceleration in the Individual Convergence of Nonsmooth Convex Optimization With Constraints.
    Tao W; Wu GW; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1107-1118. PubMed ID: 33290233

  • 6. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
    Xu D; Zhang S; Zhang H; Mandic DP
    Neural Netw; 2021 Jul; 139():17-23. PubMed ID: 33662649

  • 7. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 8. Adaptive Restart of the Optimized Gradient Method for Convex Optimization.
    Kim D; Fessler JA
    J Optim Theory Appl; 2018 Jul; 178(1):240-263. PubMed ID: 36341472

  • 9. A multivariate adaptive gradient algorithm with reduced tuning efforts.
    Saab S; Saab K; Phoha S; Zhu M; Ray A
    Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371

  • 10. Distributed Stochastic Gradient Tracking Algorithm With Variance Reduction for Non-Convex Optimization.
    Jiang X; Zeng X; Sun J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5310-5321. PubMed ID: 35536804

  • 11. Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks.
    Fan Q; Wu W; Zurada JM
    Springerplus; 2016; 5():295. PubMed ID: 27066332

  • 12. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM.
    Tong Q; Liang G; Bi J
    Neurocomputing (Amst); 2022 Apr; 481():333-356. PubMed ID: 35342226

  • 13. Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm.
    Yuan D; Ho DWC; Xu S
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2344-2357. PubMed ID: 32614775

  • 14. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 15. A proof of convergence of the concave-convex procedure using Zangwill's theory.
    Sriperumbudur BK; Lanckriet GR
    Neural Comput; 2012 Jun; 24(6):1391-407. PubMed ID: 22364501

  • 16. Stochastic Fixed Point Optimization Algorithm for Classifier Ensemble.
    Iiduka H
    IEEE Trans Cybern; 2020 Oct; 50(10):4370-4380. PubMed ID: 31247582

  • 17. Dualityfree Methods for Stochastic Composition Optimization.
    Liu L; Liu J; Tao D
    IEEE Trans Neural Netw Learn Syst; 2019 Apr; 30(4):1205-1217. PubMed ID: 30222587

  • 18. Stochastic learning via optimizing the variational inequalities.
    Tao Q; Gao QK; Chu DJ; Wu GW
    IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1769-78. PubMed ID: 25291732

  • 19. Sparse Learning with Stochastic Composite Optimization.
    Zhang W; Zhang L; Jin Z; Jin R; Cai D; Li X; Liang R; He X
    IEEE Trans Pattern Anal Mach Intell; 2017 Jun; 39(6):1223-1236. PubMed ID: 27295652

  • 20. Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks.
    Wang J; Yang J; Wu W
    IEEE Trans Neural Netw; 2011 Aug; 22(8):1297-306. PubMed ID: 21813357
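The seed article (entry 1) analyzes AdaBound, an Adam-type method whose per-coordinate step size is clipped between bound functions that tighten toward a fixed SGD-like rate; entries 4 and 12 study related Adam-type schemes. As orientation only, here is a minimal NumPy sketch of the standard AdaBound update of Luo et al. (2019) in its practical form, with bias correction omitted; the hyperparameters and the toy quadratic below are illustrative assumptions, and the seed paper's contribution is precisely to relax the conditions placed on the bound functions used here.

```python
import numpy as np

def adabound_step(theta, grad, state, t, alpha=1e-3, final_lr=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8, gamma=1e-3):
    """One AdaBound step (illustrative sketch; bias correction omitted)."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad       # first moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2  # second moment
    step = alpha / (np.sqrt(state["v"]) + eps)                 # Adam-like step size
    # The bounds converge to final_lr, so early iterations behave like Adam
    # (loose bounds) and late iterations like SGD with a constant step size.
    lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * t))
    return theta - np.clip(step, lower, upper) * state["m"]

# Toy usage on a quadratic with minimum at (1, -2, 0.5) -- an assumed example.
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
state = {"m": np.zeros(3), "v": np.zeros(3)}
for t in range(1, 2001):
    theta = adabound_step(theta, 2.0 * (theta - target), state, t)
```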
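Several other entries analyze classical baselines: entries 2, 5, 14, and 20 concern momentum and Nesterov-accelerated methods, and entry 6 concerns RMSProp. For contrast with the AdaBound sketch above, here are minimal sketches of both classical updates; the learning rates, momentum factor, and the `grad_fn` callable are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def nesterov_step(theta, vel, grad_fn, lr=1e-2, mu=0.9):
    """Nesterov momentum: take the gradient at the lookahead point."""
    g = grad_fn(theta + mu * vel)  # lookahead gradient
    vel = mu * vel - lr * g        # velocity update
    return theta + vel, vel

def rmsprop_step(theta, grad, v, lr=1e-2, beta=0.9, eps=1e-8):
    """RMSProp: scale each coordinate by a running RMS of its gradients."""
    v = beta * v + (1 - beta) * grad ** 2
    return theta - lr * grad / (np.sqrt(v) + eps), v
```

The heavy-ball variant studied in entries such as 5 and 20 differs from the Nesterov sketch only in evaluating `grad_fn` at `theta` itself rather than at the lookahead point.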
