260 related articles for article (PubMed ID: 31484139)
1. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
Tao W; Pan Z; Wu G; Tao Q
IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139
2. Momentum Acceleration in the Individual Convergence of Nonsmooth Convex Optimization With Constraints.
Tao W; Wu GW; Tao Q
IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1107-1118. PubMed ID: 33290233
3. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence.
Tao W; Pan Z; Wu G; Tao Q
IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303
4. Stochastic learning via optimizing the variational inequalities.
Tao Q; Gao QK; Chu DJ; Wu GW
IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1769-78. PubMed ID: 25291732
5. Adaptive Restart of the Optimized Gradient Method for Convex Optimization.
Kim D; Fessler JA
J Optim Theory Appl; 2018 Jul; 178(1):240-263. PubMed ID: 36341472
6. Continuation of Nesterov's Smoothing for Regression With Structured Sparsity in High-Dimensional Neuroimaging.
Hadj-Selem F; Löfstedt T; Dohmatob E; Frouin V; Dubois M; Guillemot V; Duchesnay E
IEEE Trans Med Imaging; 2018 Nov; 37(11):2403-2413. PubMed ID: 29993684
7. Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments.
Hishinuma K; Iiduka H
Front Robot AI; 2019; 6():77. PubMed ID: 33501092
8. Optimized first-order methods for smooth convex minimization.
Kim D; Fessler JA
Math Program; 2016 Sep; 159(1):81-107. PubMed ID: 27765996
9. Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates.
Boţ RI; Csetnek ER; Nguyen DK
Math Program; 2023; 200(1):147-197. PubMed ID: 37215306
10. An accelerated minimax algorithm for convex-concave saddle point problems with nonsmooth coupling function.
Boţ RI; Csetnek ER; Sedlmayer M
Comput Optim Appl; 2023; 86(3):925-966. PubMed ID: 37969869
11. Weighted SGD for ℓp Regression with Randomized Preconditioning.
Yang J; Chow YL; Ré C; Mahoney MW
Proc Annu ACM SIAM Symp Discret Algorithms; 2016 Jan; 2016():558-569. PubMed ID: 29782626
12. Subgradient ellipsoid method for nonsmooth convex problems.
Rodomanov A; Nesterov Y
Math Program; 2023; 199(1-2):305-341. PubMed ID: 37155414
13. Accelerated statistical reconstruction for C-arm cone-beam CT using Nesterov's method.
Wang AS; Stayman JW; Otake Y; Vogt S; Kleinszig G; Siewerdsen JH
Med Phys; 2015 May; 42(5):2699-708. PubMed ID: 25979068
14. Distributed Stochastic Proximal Algorithm With Random Reshuffling for Nonsmooth Finite-Sum Optimization.
Jiang X; Zeng X; Sun J; Chen J; Xie L
IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):4082-4096. PubMed ID: 36070265
15. Scalable Proximal Jacobian Iteration Method With Global Convergence Analysis for Nonconvex Unconstrained Composite Optimizations.
Zhang H; Qian J; Gao J; Yang J; Xu C
IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2825-2839. PubMed ID: 30668503
16. Gradient flows and proximal splitting methods: A unified view on accelerated and stochastic optimization.
França G; Robinson DP; Vidal R
Phys Rev E; 2021 May; 103(5-1):053304. PubMed ID: 34134224
17. Accelerated Mini-batch Randomized Block Coordinate Descent Method.
Zhao T; Yu M; Wang Y; Arora R; Liu H
Adv Neural Inf Process Syst; 2014 Dec; 27():5614. PubMed ID: 25620860
18. Novel projection neurodynamic approaches for constrained convex optimization.
Zhao Y; Liao X; He X
Neural Netw; 2022 Jun; 150():336-349. PubMed ID: 35344705
19. On the Convergence Analysis of the Optimized Gradient Method.
Kim D; Fessler JA
J Optim Theory Appl; 2017 Jan; 172(1):187-205. PubMed ID: 28461707
20. A subgradient-based neurodynamic algorithm to constrained nonsmooth nonconvex interval-valued optimization.
Liu J; Liao X; Dong JS; Mansoori A
Neural Netw; 2023 Mar; 160():259-273. PubMed ID: 36709530