136 related articles for article (PubMed ID: 37284602)
1. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent. Pu S; Olshevsky A; Paschalidis IC. IEEE Trans Automat Contr; 2022 Nov; 67(11):5900-5915. PubMed ID: 37284602.
2. Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning. Pu S; Olshevsky A; Paschalidis IC. IEEE Signal Process Mag; 2020 May; 37(3):114-122. PubMed ID: 33746471.
4. Decentralized stochastic sharpness-aware minimization algorithm. Chen S; Deng X; Xu D; Sun T; Li D. Neural Netw; 2024 Aug; 176:106325. PubMed ID: 38653126.
5. Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise. Yu Z; Ho DWC; Yuan D; Liu J. IEEE Trans Cybern; 2023 Jun; 53(6):3561-3573. PubMed ID: 34818207.
7. Dualityfree Methods for Stochastic Composition Optimization. Liu L; Liu J; Tao D. IEEE Trans Neural Netw Learn Syst; 2019 Apr; 30(4):1205-1217. PubMed ID: 30222587.
9. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization. Tao W; Pan Z; Wu G; Tao Q. IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139.
10. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions. Lei Y; Hu T; Li G; Tang K. IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449.
11. Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. Gower RM; Richtárik P; Bach F. Math Program; 2021; 188(1):135-192. PubMed ID: 34720193.
12. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence. Tao W; Pan Z; Wu G; Tao Q. IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303.
13. Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm. Yuan D; Ho DWC; Xu S. IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2344-2357. PubMed ID: 32614775.