131 related articles for article (PubMed ID: 33290233); the first 20 are listed below.
1. Momentum Acceleration in the Individual Convergence of Nonsmooth Convex Optimization With Constraints. Tao W; Wu GW; Tao Q. IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1107-1118. PubMed ID: 33290233.
2. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization. Tao W; Pan Z; Wu G; Tao Q. IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139.
3. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence. Tao W; Pan Z; Wu G; Tao Q. IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303.
4. Stochastic momentum methods for non-convex learning without bounded assumptions. Liang Y; Liu J; Xu D. Neural Netw; 2023 Aug; 165:830-845. PubMed ID: 37418864.
5. Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Kim D; Fessler JA. J Optim Theory Appl; 2018 Jul; 178(1):240-263. PubMed ID: 36341472.
6. A Unified Analysis of AdaGrad With Weighted Aggregation and Momentum Acceleration. Shen L; Chen C; Zou F; Jie Z; Sun J; Liu W. IEEE Trans Neural Netw Learn Syst; 2024 Oct; 35(10):14482-14490. PubMed ID: 37310828.
7. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization. Liu J; Kong J; Xu D; Qi M; Lu Y. Neural Netw; 2022 Jan; 145:300-307. PubMed ID: 34785445.
8. Convergence Analysis of Distributed Gradient Descent Algorithms With One and Two Momentum Terms. Liu B; Chai L; Yi J. IEEE Trans Cybern; 2024 Mar; 54(3):1511-1522. PubMed ID: 36355726.
9. Distributed Stochastic Proximal Algorithm With Random Reshuffling for Nonsmooth Finite-Sum Optimization. Jiang X; Zeng X; Sun J; Chen J; Xie L. IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):4082-4096. PubMed ID: 36070265.
10. NALA: a Nesterov accelerated look-ahead optimizer for deep learning. Zuo X; Li HY; Gao S; Zhang P; Du WR. PeerJ Comput Sci; 2024; 10:e2167. PubMed ID: 38983239.
11. Accelerated statistical reconstruction for C-arm cone-beam CT using Nesterov's method. Wang AS; Stayman JW; Otake Y; Vogt S; Kleinszig G; Siewerdsen JH. Med Phys; 2015 May; 42(5):2699-708. PubMed ID: 25979068.
12. An accelerated minimax algorithm for convex-concave saddle point problems with nonsmooth coupling function. Boţ RI; Csetnek ER; Sedlmayer M. Comput Optim Appl; 2023; 86(3):925-966. PubMed ID: 37969869.
13. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks. Sun H; Shen L; Zhong Q; Ding L; Chen S; Sun J; Li J; Sun G; Tao D. Neural Netw; 2024 Jan; 169:506-519. PubMed ID: 37944247.
14. Novel projection neurodynamic approaches for constrained convex optimization. Zhao Y; Liao X; He X. Neural Netw; 2022 Jun; 150:336-349. PubMed ID: 35344705.
15. Stochastic learning via optimizing the variational inequalities. Tao Q; Gao QK; Chu DJ; Wu GW. IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1769-78. PubMed ID: 25291732.
16. Continuation of Nesterov's Smoothing for Regression With Structured Sparsity in High-Dimensional Neuroimaging. Hadj-Selem F; Lofstedt T; Dohmatob E; Frouin V; Dubois M; Guillemot V; Duchesnay E. IEEE Trans Med Imaging; 2018 Nov; 37(11):2403-2413. PubMed ID: 29993684.
17. Scalable Proximal Jacobian Iteration Method With Global Convergence Analysis for Nonconvex Unconstrained Composite Optimizations. Zhang H; Qian J; Gao J; Yang J; Xu C. IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2825-2839. PubMed ID: 30668503.
18. Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates. Boţ RI; Csetnek ER; Nguyen DK. Math Program; 2023; 200(1):147-197. PubMed ID: 37215306.
19. Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise. Yu Z; Ho DWC; Yuan D; Liu J. IEEE Trans Cybern; 2023 Jun; 53(6):3561-3573. PubMed ID: 34818207.
20. On the Convergence Analysis of the Optimized Gradient Method. Kim D; Fessler JA. J Optim Theory Appl; 2017 Jan; 172(1):187-205. PubMed ID: 28461707.