215 related articles for article (PubMed ID: 35640371)
1. A multivariate adaptive gradient algorithm with reduced tuning efforts.
Saab S; Saab K; Phoha S; Zhu M; Ray A
Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371
2. Adaptive Restart of the Optimized Gradient Method for Convex Optimization.
Kim D; Fessler JA
J Optim Theory Appl; 2018 Jul; 178(1):240-263. PubMed ID: 36341472
3. Distributed Stochastic Gradient Tracking Algorithm With Variance Reduction for Non-Convex Optimization.
Jiang X; Zeng X; Sun J; Chen J
IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5310-5321. PubMed ID: 35536804
4. A novel adaptive momentum method for medical image classification using convolutional neural network.
Aytaç UC; Güneş A; Ajlouni N
BMC Med Imaging; 2022 Mar; 22(1):34. PubMed ID: 35232390
5. Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.
Liu J; Li B; Zhou Y; Zhao X; Zhu J; Zhang M
Comput Intell Neurosci; 2022; 2022():9337209. PubMed ID: 35694581
6. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
Sun H; Shen L; Zhong Q; Ding L; Chen S; Sun J; Li J; Sun G; Tao D
Neural Netw; 2024 Jan; 169():506-519. PubMed ID: 37944247
7. Gradient regularization of Newton method with Bregman distances.
Doikov N; Nesterov Y
Math Program; 2024; 204(1-2):1-25. PubMed ID: 38371323
8. Selecting the best optimizers for deep learning-based medical image segmentation.
Mortazi A; Cicek V; Keles E; Bagci U
Front Radiol; 2023; 3():1175473. PubMed ID: 37810757
9. A fast saddle-point dynamical system approach to robust deep learning.
Esfandiari Y; Balu A; Ebrahimi K; Vaidya U; Elia N; Sarkar S
Neural Netw; 2021 Jul; 139():33-44. PubMed ID: 33677377
10. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization.
Liu J; Kong J; Xu D; Qi M; Lu Y
Neural Netw; 2022 Jan; 145():300-307. PubMed ID: 34785445
11. LipGene: Lipschitz Continuity Guided Adaptive Learning Rates for Fast Convergence on Microarray Expression Data Sets.
Prashanth T; Saha S; Basarkod S; Aralihalli S; Dhavala SS; Saha S; Aduri R
IEEE/ACM Trans Comput Biol Bioinform; 2022; 19(6):3553-3563. PubMed ID: 34495836
12. Stochastic momentum methods for non-convex learning without bounded assumptions.
Liang Y; Liu J; Xu D
Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864
13. Piecewise convexity of artificial neural networks.
Rister B; Rubin DL
Neural Netw; 2017 Oct; 94():34-45. PubMed ID: 28732233
14. On the Convergence Analysis of the Optimized Gradient Method.
Kim D; Fessler JA
J Optim Theory Appl; 2017 Jan; 172(1):187-205. PubMed ID: 28461707
15. Training Neural Networks by Lifted Proximal Operator Machines.
Li J; Xiao M; Fang C; Dai Y; Xu C; Lin Z
IEEE Trans Pattern Anal Mach Intell; 2022 Jun; 44(6):3334-3348. PubMed ID: 33382647
16. A Minibatch Proximal Stochastic Recursive Gradient Algorithm Using a Trust-Region-Like Scheme and Barzilai-Borwein Stepsizes.
Yu T; Liu XW; Dai YH; Sun J
IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4627-4638. PubMed ID: 33021942
17. Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments.
Hishinuma K; Iiduka H
Front Robot AI; 2019; 6():77. PubMed ID: 33501092
18. Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates.
Boţ RI; Csetnek ER; Nguyen DK
Math Program; 2023; 200(1):147-197. PubMed ID: 37215306
19. Dualityfree Methods for Stochastic Composition Optimization.
Liu L; Liu J; Tao D
IEEE Trans Neural Netw Learn Syst; 2019 Apr; 30(4):1205-1217. PubMed ID: 30222587
20. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
Shirwaikar RD; Acharya UD; Makkithaya K; M S; Srivastava S; Lewis ULES
Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253