199 related articles for article (PubMed ID: 19435681)
1. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. Zhang H; Wu W; Liu F; Yao M. IEEE Trans Neural Netw; 2009 Jun; 20(6):1050-4. PubMed ID: 19435681
2. Training pi-sigma network by online gradient algorithm with penalty for small weight update. Xiong Y; Wu W; Kang X; Zhang C. Neural Comput; 2007 Dec; 19(12):3356-68. PubMed ID: 17970657
3. Convergence of gradient method with momentum for two-layer feedforward neural networks. Zhang N; Wu W; Zheng G. IEEE Trans Neural Netw; 2006 Mar; 17(2):522-5. PubMed ID: 16566479
4. Analysis of boundedness and convergence of online gradient method for two-layer feedforward neural networks. Xu L; Chen J; Huang D; Lu J; Fang L. IEEE Trans Neural Netw Learn Syst; 2013 Aug; 24(8):1327-38. PubMed ID: 24808571
5. Recurrent neural networks training with stable bounding ellipsoid algorithm. Yu W; de Jesús Rubio J. IEEE Trans Neural Netw; 2009 Jun; 20(6):983-91. PubMed ID: 19447727
6. Magnified gradient function with deterministic weight modification in adaptive learning. Ng SC; Cheung CC; Leung SH. IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769
7. Global convergence and limit cycle behavior of weights of perceptron. Ho CY; Ling BW; Lam HK; Nasir MH. IEEE Trans Neural Netw; 2008 Jun; 19(6):938-47. PubMed ID: 18541495
8. Implementing online natural gradient learning: problems and solutions. Wan W. IEEE Trans Neural Netw; 2006 Mar; 17(2):317-29. PubMed ID: 16566461
9. A recurrent neural network for solving bilevel linear programming problem. He X; Li C; Huang T; Li C; Huang J. IEEE Trans Neural Netw Learn Syst; 2014 Apr; 25(4):824-30. PubMed ID: 24807959
10. Deterministic convergence of an online gradient method for BP neural networks. Wu W; Feng G; Li Z; Xu Y. IEEE Trans Neural Netw; 2005 May; 16(3):533-40. PubMed ID: 15940984
11. Toward the training of feed-forward neural networks with the D-optimum input sequence. Witczak M. IEEE Trans Neural Netw; 2006 Mar; 17(2):357-73. PubMed ID: 16566464
12. On the weight convergence of Elman networks. Song Q. IEEE Trans Neural Netw; 2010 Mar; 21(3):463-80. PubMed ID: 20129857
13. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain. Song Q; Wu Y; Soh YC. IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640
14. Computational properties and convergence analysis of BPNN for cyclic and almost cyclic learning with penalty. Wang J; Wu W; Zurada JM. Neural Netw; 2012 Sep; 33:127-35. PubMed ID: 22622263
15. When does online BP training converge? Xu ZB; Zhang R; Jing WF. IEEE Trans Neural Netw; 2009 Oct; 20(10):1529-39. PubMed ID: 19695997
16. Subgradient-based neural networks for nonsmooth nonconvex optimization problems. Bian W; Xue X. IEEE Trans Neural Netw; 2009 Jun; 20(6):1024-38. PubMed ID: 19457749
17. Error minimized extreme learning machine with growth of hidden nodes and incremental learning. Feng G; Huang GB; Lin Q; Gay R. IEEE Trans Neural Netw; 2009 Aug; 20(8):1352-7. PubMed ID: 19596632
18. Performance of the Bayesian online algorithm for the perceptron. de Oliveira EA; Alamino RC. IEEE Trans Neural Netw; 2007 May; 18(3):902-5. PubMed ID: 17526354
19. Global convergence of online BP training with dynamic learning rate. Zhang R; Xu ZB; Huang GB; Wang D. IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):330-41. PubMed ID: 24808511
20. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks. Wu W; Fan Q; Zurada JM; Wang J; Yang D; Liu Y. Neural Netw; 2014 Feb; 50:72-8. PubMed ID: 24291693
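The common thread of many entries above is online (sample-by-sample) gradient training of a feedforward network with a regularization penalty added to the error function. As a rough illustration only, and not reproducing the method or analysis of any cited paper, the sketch below trains a single-hidden-layer network online with an L2 weight penalty; the data, hyperparameters (eta, lam, n_hidden), and variable names are assumptions chosen for demonstration.

```python
# Illustrative sketch (not from any cited paper): online gradient training of a
# single-hidden-layer network with an L2 weight penalty. All values are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

n_hidden = 10
W = 0.5 * rng.standard_normal((1, n_hidden))   # input-to-hidden weights
v = 0.5 * rng.standard_normal((n_hidden, 1))   # hidden-to-output weights
eta, lam = 0.05, 1e-3                          # learning rate, penalty coefficient

for epoch in range(100):
    for i in rng.permutation(len(X)):          # online: update after each sample
        x_i, y_i = X[i:i + 1], y[i:i + 1]
        h = np.tanh(x_i @ W)                   # hidden activations, shape (1, n_hidden)
        err = h @ v - y_i                      # output error, shape (1, 1)
        # Gradients of 0.5*err^2 + 0.5*lam*(||W||^2 + ||v||^2)
        grad_v = h.T @ err + lam * v
        grad_W = x_i.T @ (err @ v.T * (1 - h**2)) + lam * W
        v -= eta * grad_v
        W -= eta * grad_W

print("final training MSE:", float(np.mean((np.tanh(X @ W) @ v - y) ** 2)))
```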