89 related articles for article (PubMed ID: 18282868)
1. A new back-propagation algorithm with coupled neuron. Fukumi M; Omatu S. IEEE Trans Neural Netw; 1991; 2(5):535-8. PubMed ID: 18282868
2. A new backpropagation learning algorithm for layered neural networks with nondifferentiable units. Oohori T; Naganuma H; Watanabe K. Neural Comput; 2007 May; 19(5):1422-35. PubMed ID: 17381272
6. A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm. Abid S; Fnaiech F; Najim M. IEEE Trans Neural Netw; 2001; 12(2):424-30. PubMed ID: 18244397
7. A local linearized least squares algorithm for training feedforward neural networks. Stan O; Kamen E. IEEE Trans Neural Netw; 2000; 11(2):487-95. PubMed ID: 18249777
8. A linear recurrent kernel online learning algorithm with sparse updates. Fan H; Song Q. Neural Netw; 2014 Feb; 50:142-53. PubMed ID: 24300551
9. Novel maximum-margin training algorithms for supervised neural networks. Ludwig O; Nunes U. IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990
10. Training two-layered feedforward networks with variable projection method. Kim CT; Lee JJ. IEEE Trans Neural Netw; 2008 Feb; 19(2):371-5. PubMed ID: 18269969
11. Equivalence of backpropagation and contrastive Hebbian learning in a layered network. Xie X; Seung HS. Neural Comput; 2003 Feb; 15(2):441-54. PubMed ID: 12590814
12. Reformulated radial basis neural networks trained by gradient descent. Karayiannis NB. IEEE Trans Neural Netw; 1999; 10(3):657-71. PubMed ID: 18252566
13. A generalized learning paradigm exploiting the structure of feedforward neural networks. Parisi R; Di Claudio ED; Orlandi G; Rao BD. IEEE Trans Neural Netw; 1996; 7(6):1450-60. PubMed ID: 18263538
14. Back-propagation is not efficient. Síma J. Neural Netw; 1996 Aug; 9(6):1017-1023. PubMed ID: 12662580
15. Convergence analysis of online gradient method for BP neural networks. Wu W; Wang J; Cheng M; Li Z. Neural Netw; 2011 Jan; 24(1):91-8. PubMed ID: 20870390
16. The No-Prop algorithm: a new learning algorithm for multilayer neural networks. Widrow B; Greenblatt A; Kim Y; Park D. Neural Netw; 2013 Jan; 37:182-8. PubMed ID: 23140797
17. Learning and convergence analysis of neural-type structured networks. Polycarpou MM; Ioannou PA. IEEE Trans Neural Netw; 1992; 3(1):39-50. PubMed ID: 18276404
18. Dynamic learning rate optimization of the backpropagation algorithm. Yu XH; Chen GA; Cheng SX. IEEE Trans Neural Netw; 1995; 6(3):669-77. PubMed ID: 18263352
19. Hyperbolic Gradient Operator and Hyperbolic Back-Propagation Learning Algorithms. Nitta T; Kuroe Y. IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1689-1702. PubMed ID: 28358692
20. Magnified gradient function with deterministic weight modification in adaptive learning. Ng SC; Cheung CC; Leung SH. IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769