7. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction. Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis LES. Artif Intell Med; 2019 Jul; 98:59-76. PubMed ID: 31521253
8. Decentralized stochastic sharpness-aware minimization algorithm. Chen S; Deng X; Xu D; Sun T; Li D. Neural Netw; 2024 Aug; 176:106325. PubMed ID: 38653126
9. Learning curves for stochastic gradient descent in linear feedforward networks. Werfel J; Xie X; Seung HS. Neural Comput; 2005 Dec; 17(12):2699-718. PubMed ID: 16212768
10. Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks. Bagley BA; Bordelon B; Moseley B; Wessel R. PLoS One; 2020; 15(2):e0229083. PubMed ID: 32092107
11. Optimization and applications of echo state networks with leaky-integrator neurons. Jaeger H; Lukosevicius M; Popovici D; Siewert U. Neural Netw; 2007 Apr; 20(3):335-52. PubMed ID: 17517495
12. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks. Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L. IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265
13. Block-cyclic stochastic coordinate descent for deep neural networks. Nakamura K; Soatto S; Hong BW. Neural Netw; 2021 Jul; 139:348-357. PubMed ID: 33887584
15. A solution to the learning dilemma for recurrent networks of spiking neurons. Bellec G; Scherr F; Subramoney A; Hajek E; Salaj D; Legenstein R; Maass W. Nat Commun; 2020 Jul; 11(1):3625. PubMed ID: 32681001
16. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Wang X; Lin X; Dang X. Neural Netw; 2020 May; 125:258-280. PubMed ID: 32146356
17. Accelerating deep neural network training with inconsistent stochastic gradient descent. Wang L; Yang Y; Min R; Chakradhar S. Neural Netw; 2017 Sep; 93:219-229. PubMed ID: 28668660
18. ASD+M: Automatic parameter tuning in stochastic optimization and on-line learning. Wawrzyński P. Neural Netw; 2017 Dec; 96:1-10. PubMed ID: 28950104
19. Computational Principles of Supervised Learning in the Cerebellum. Raymond JL; Medina JF. Annu Rev Neurosci; 2018 Jul; 41:233-253. PubMed ID: 29986160
20. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks. Xu Y; Zeng X; Han L; Yang J. Neural Netw; 2013 Jul; 43:99-113. PubMed ID: 23500504