23. Data classification based on fractional order gradient descent with momentum for RBF neural network. Xue H; Shao Z; Sun H. Network. 2020;31(1-4):166-185. PubMed ID: 33283569
24. A linear recurrent kernel online learning algorithm with sparse updates. Fan H; Song Q. Neural Netw. 2014 Feb;50:142-53. PubMed ID: 24300551
25. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. Xu D; Zhang H; Liu L. Neural Comput. 2010 Oct;22(10):2655-77. PubMed ID: 20608871
26. Supervised learning on large redundant training sets. Møller M. Int J Neural Syst. 1993 Mar;4(1):15-25. PubMed ID: 8049786
27. Composite adaptive control with locally weighted statistical learning. Nakanishi J; Farrell JA; Schaal S. Neural Netw. 2005 Jan;18(1):71-90. PubMed ID: 15649663
28. Neural network learning with global heuristic search. Jordanov I; Georgieva A. IEEE Trans Neural Netw. 2007 May;18(3):937-42. PubMed ID: 17526362
29. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method. Bhaya A; Kaszkurewicz E. Neural Netw. 2004 Jan;17(1):65-71. PubMed ID: 14690708
30. A modified error backpropagation algorithm for complex-valued neural networks. Chen X; Tang Z; Variappan C; Li S; Okada T. Int J Neural Syst. 2005 Dec;15(6):435-43. PubMed ID: 16385633
31. A constrained optimization approach to preserving prior knowledge during incremental training. Ferrari S; Jensenius M. IEEE Trans Neural Netw. 2008 Jun;19(6):996-1009. PubMed ID: 18541500
32. Accelerated learning by active example selection. Zhang BT. Int J Neural Syst. 1994 Mar;5(1):67-75. PubMed ID: 7921386
33. Error analysis of stochastic gradient descent ranking. Chen H; Tang Y; Li L; Yuan Y; Li X; Tang Y. IEEE Trans Cybern. 2013 Jun;43(3):898-909. PubMed ID: 24083315
34. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Amari S; Park H; Fukumizu K. Neural Comput. 2000 Jun;12(6):1399-409. PubMed ID: 10935719
36. Training pi-sigma network by online gradient algorithm with penalty for small weight update. Xiong Y; Wu W; Kang X; Zhang C. Neural Comput. 2007 Dec;19(12):3356-68. PubMed ID: 17970657
38. A modified backpropagation learning algorithm with added emotional coefficients. Khashman A. IEEE Trans Neural Netw. 2008 Nov;19(11):1896-909. PubMed ID: 18990644
39. A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks. Man Z; Wu HR; Liu S; Yu X. IEEE Trans Neural Netw. 2006 Nov;17(6):1580-91. PubMed ID: 17131670
40. Convergence of gradient method with momentum for two-layer feedforward neural networks. Zhang N; Wu W; Zheng G. IEEE Trans Neural Netw. 2006 Mar;17(2):522-5. PubMed ID: 16566479