5. Backpropagation and ordered derivatives in the time scales calculus. Seiffertt J; Wunsch DC. IEEE Trans Neural Netw. 2010 Aug;21(8):1262-9. PubMed ID: 20615808
6. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain. Song Q; Wu Y; Soh YC. IEEE Trans Neural Netw. 2008 Nov;19(11):1841-53. PubMed ID: 18990640
7. Hyperbolic Gradient Operator and Hyperbolic Back-Propagation Learning Algorithms. Nitta T; Kuroe Y. IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1689-1702. PubMed ID: 28358692
8. Fractional-order gradient descent learning of BP neural networks with Caputo derivative. Wang J; Wen Y; Gou Y; Ye Z; Chen H. Neural Netw. 2017 May;89:19-30. PubMed ID: 28278430
9. An analog VLSI recurrent neural network learning a continuous-time trajectory. Cauwenberghs G. IEEE Trans Neural Netw. 1996;7(2):346-61. PubMed ID: 18255589
10. High-order neural network structures for identification of dynamical systems. Kosmatopoulos EB; Polycarpou MM; Christodoulou MA; Ioannou PA. IEEE Trans Neural Netw. 1995;6(2):422-31. PubMed ID: 18263324
11. Recurrent neural-network training by a learning automaton approach for trajectory learning and control system design. Sundareshan MK; Condarcure TA. IEEE Trans Neural Netw. 1998;9(3):354-68. PubMed ID: 18252461
13. Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. Puskorius GV; Feldkamp LA. IEEE Trans Neural Netw. 1994;5(2):279-97. PubMed ID: 18267797
14. Novel maximum-margin training algorithms for supervised neural networks. Ludwig O; Nunes U. IEEE Trans Neural Netw. 2010 Jun;21(6):972-84. PubMed ID: 20409990
15. Gradient descent learning in and out of equilibrium. Caticha N; Araújo de Oliveira E. Phys Rev E Stat Nonlin Soft Matter Phys. 2001 Jun;63(6 Pt 1):061905. PubMed ID: 11415143
16. Extended Hamiltonian learning on Riemannian manifolds: theoretical aspects. Fiori S. IEEE Trans Neural Netw. 2011 May;22(5):687-700. PubMed ID: 21427023
17. Variational data assimilation for the initial-value dynamo problem. Li K; Jackson A; Livermore PW. Phys Rev E Stat Nonlin Soft Matter Phys. 2011 Nov;84(5 Pt 2):056321. PubMed ID: 22181512
18. Comments on "Backpropagation algorithms for a broad class of dynamic networks". Endisch C; Stolze P; Hackl C; Schroder D. IEEE Trans Neural Netw. 2009 Mar;20(3):540-1. PubMed ID: 19211353
19. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method. Ghosh R; Yearwood J; Ghosh M; Bagirov A. Int J Neural Syst. 2006 Jun;16(3):201-13. PubMed ID: 17044241
20. A fast and scalable recurrent neural network based on stochastic meta descent. Liu Z; Elhanany I. IEEE Trans Neural Netw. 2008 Sep;19(9):1652-8. PubMed ID: 18779096