These tools will no longer be maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.
122 related articles for article (PubMed ID: 38653126)
1. Decentralized stochastic sharpness-aware minimization algorithm. Chen S, Deng X, Xu D, Sun T, Li D. Neural Netw. 2024 Aug;176:106325. PubMed ID: 38653126
2. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks. Sun H, Shen L, Zhong Q, Ding L, Chen S, Sun J, Li J, Sun G, Tao D. Neural Netw. 2024 Jan;169:506-519. PubMed ID: 37944247
4. Sharpness-Aware Lookahead for Accelerating Convergence and Improving Generalization. Tan C, Zhang J, Liu J, Gong Y. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):10375-10388. PubMed ID: 39146156
6. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent. Pu S, Olshevsky A, Paschalidis IC. IEEE Trans Automat Contr. 2022 Nov;67(11):5900-5915. PubMed ID: 37284602
7. Exploring Regularization Methods for Domain Generalization in Accelerometer-Based Human Activity Recognition. Bento N, Rebelo J, Carreiro AV, Ravache F, Barandas M. Sensors (Basel). 2023 Jul;23(14). PubMed ID: 37514805
8. A Graph Neural Network Based Decentralized Learning Scheme. Gao H, Lee M, Yu G, Zhou Z. Sensors (Basel). 2022 Jan;22(3). PubMed ID: 35161776
9. FedGAMMA: Federated Learning With Global Sharpness-Aware Minimization. Dai R, Yang X, Sun Y, Shen L, Tian X, Wang M, Zhang Y. IEEE Trans Neural Netw Learn Syst. 2023 Oct;PP. PubMed ID: 37788191
10. Personalized On-Device E-Health Analytics With Decentralized Block Coordinate Descent. Ye G, Yin H, Chen T, Xu M, Nguyen QVH, Song J. IEEE J Biomed Health Inform. 2022 Jun;26(6):2778-2786. PubMed ID: 34986109
11. Regularizing Scale-Adaptive Central Moment Sharpness for Neural Networks. Chen J, Guo Z, Li H, Chen CLP. IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6452-6466. PubMed ID: 36215387
12. Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes. Christensen S, Kallsen J. Neural Comput. 2024 Jun;36(7):1424-1432. PubMed ID: 38669690
13. Dominating Set Model Aggregation for communication-efficient decentralized deep learning. Fotouhi F, Balu A, Jiang Z, Esfandiari Y, Jahani S, Sarkar S. Neural Netw. 2024 Mar;171:25-39. PubMed ID: 38091762
14. Block-cyclic stochastic coordinate descent for deep neural networks. Nakamura K, Soatto S, Hong BW. Neural Netw. 2021 Jul;139:348-357. PubMed ID: 33887584
15. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks. Wang H, Luo Y, An W, Sun Q, Xu J, Zhang L. IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5079-5091. PubMed ID: 32011265
16. Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers. Paquin AL, Chaib-Draa B, Giguère P. Neural Netw. 2023 Jul;164:382-394. PubMed ID: 37167751
17. ASD+M: Automatic parameter tuning in stochastic optimization and on-line learning. Wawrzyński P. Neural Netw. 2017 Dec;96:1-10. PubMed ID: 28950104
18. Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression. Le Thi HA, Le HM, Phan DN, Tran B. Neural Netw. 2020 Dec;132:220-231. PubMed ID: 32919312