2. An Efficient Preconditioner for Stochastic Gradient Descent Optimization of Image Registration. Qiao Y; Lelieveldt BPF; Staring M. IEEE Trans Med Imaging; 2019 Oct; 38(10):2314-2325. PubMed ID: 30762536
3. Weighted SGD for ℓp Regression with Randomized Preconditioning. Yang J; Chow YL; Ré C; Mahoney MW. Proc Annu ACM SIAM Symp Discret Algorithms; 2016 Jan; 2016():558-569. PubMed ID: 29782626
4. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions. Lei Y; Hu T; Li G; Tang K. IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449
5. Accelerating deep neural network training with inconsistent stochastic gradient descent. Wang L; Yang Y; Min R; Chakradhar S. Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660
6. Newton-Raphson preconditioner for Krylov type solvers on GPU devices. Kushida N. Springerplus; 2016; 5(1):788. PubMed ID: 27386273
7. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling. Peng X; Li L; Wang FY. IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442
9. A limited-memory, quasi-Newton preconditioner for nonnegatively constrained image reconstruction. Bardsley JM. J Opt Soc Am A Opt Image Sci Vis; 2004 May; 21(5):724-731. PubMed ID: 15139424
10. Surface structure feature matching algorithm for cardiac motion estimation. Zhang Z; Yang X; Tan C; Guo W; Chen G. BMC Med Inform Decis Mak; 2017 Dec; 17(Suppl 3):172. PubMed ID: 29297330
11. Faster Stochastic Quasi-Newton Methods. Zhang Q; Huang F; Deng C; Huang H. IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4388-4397. PubMed ID: 33667166
12. A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics. Fioresi R; Chaudhari P; Soatto S. Entropy (Basel); 2020 Jan; 22(1). PubMed ID: 33285876
13. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent. Pu S; Olshevsky A; Paschalidis IC. IEEE Trans Automat Contr; 2022 Nov; 67(11):5900-5915. PubMed ID: 37284602
14. A mean field view of the landscape of two-layer neural networks. Mei S; Montanari A; Nguyen PM. Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315
15. Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. Gower RM; Richtárik P; Bach F. Math Program; 2021; 188(1):135-192. PubMed ID: 34720193
16. Learning Rates for Stochastic Gradient Descent With Nonconvex Objectives. Lei Y; Tang K. IEEE Trans Pattern Anal Mach Intell; 2021 Dec; 43(12):4505-4511. PubMed ID: 33755555
17. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization. Tao W; Pan Z; Wu G; Tao Q. IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139