143 related articles for article (PubMed ID: 9804675)
1. Complexity issues in natural gradient descent method for training multilayer perceptrons. Yang HH; Amari S. Neural Comput; 1998 Nov; 10(8):2137-57. PubMed ID: 9804675. (A minimal sketch of the natural-gradient update appears after this list.)
2. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Auer P; Burgsteiner H; Maass W. Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524.
3. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Amari S; Park H; Fukumizu K. Neural Comput; 2000 Jun; 12(6):1399-409. PubMed ID: 10935719.
4. Adaptive natural gradient learning algorithms for various stochastic models. Park H; Amari SI; Fukumizu K. Neural Netw; 2000 Sep; 13(7):755-64. PubMed ID: 11152207.
5. Novel maximum-margin training algorithms for supervised neural networks. Ludwig O; Nunes U. IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990.
6. Complexity of error hypersurfaces in multilayer perceptrons. Liang X. Int J Neural Syst; 2004 Jun; 14(3):189-200. PubMed ID: 15243951.
7. Methods of training and constructing multilayer perceptrons with arbitrary pattern sets. Liang X; Xia S. Int J Neural Syst; 1995 Sep; 6(3):233-47. PubMed ID: 8589861.
8. Effective neural network training with adaptive learning rate based on training loss. Takase T; Oyama S; Kurihara M. Neural Netw; 2018 May; 101:68-78. PubMed ID: 29494873.
9. Specification of training sets and the number of hidden neurons for multilayer perceptrons. Camargo LS; Yoneyama T. Neural Comput; 2001 Dec; 13(12):2673-80. PubMed ID: 11705406.
10. A fast and convergent stochastic MLP learning algorithm. Sakurai A. Int J Neural Syst; 2001 Dec; 11(6):573-83. PubMed ID: 11852440.
11. A fast and scalable recurrent neural network based on stochastic meta descent. Liu Z; Elhanany I. IEEE Trans Neural Netw; 2008 Sep; 19(9):1652-8. PubMed ID: 18779096.
12. Learning curves for stochastic gradient descent in linear feedforward networks. Werfel J; Xie X; Seung HS. Neural Comput; 2005 Dec; 17(12):2699-718. PubMed ID: 16212768.
13. Dynamics of learning near singularities in layered networks. Wei H; Zhang J; Cousseau F; Ozeki T; Amari S. Neural Comput; 2008 Mar; 20(3):813-43. PubMed ID: 18045020.
14. On 'natural' learning and pruning in multi-layered perceptrons. Heskes T. Neural Comput; 2000 Apr; 12(4):881-901. PubMed ID: 10770836.
15. How dependencies between successive examples affect on-line learning. Wiegerinck W; Heskes T. Neural Comput; 1996 Nov; 8(8):1743-65. PubMed ID: 8888616.
16. Dynamics of learning in multilayer perceptrons near singularities. Cousseau F; Ozeki T; Amari S. IEEE Trans Neural Netw; 2008 Aug; 19(8):1313-28. PubMed ID: 18701364.
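The seed article (entry 1) and the adaptive methods in entries 3 and 4 concern natural gradient descent, which replaces the ordinary gradient step with one preconditioned by the inverse Fisher information matrix. Below is a minimal sketch in Python, assuming a one-hidden-layer MLP with a unit-variance Gaussian output model, so the Fisher matrix can be estimated as the average outer product of per-sample gradients; the network sizes, learning rate, and damping constant are illustrative choices, not values taken from the cited papers.

    # Minimal sketch of natural gradient descent for a one-hidden-layer MLP.
    # Assumes a unit-variance Gaussian output model, so the Fisher information
    # matrix is estimated as the average outer product of per-sample gradients.
    # All sizes and hyperparameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data: y = sin(x) + noise.
    X = rng.uniform(-3.0, 3.0, size=(64, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

    n_hidden = 5
    # Parameter shapes (W1, b1, W2, b2), flattened into one vector so the
    # Fisher matrix over all parameters is easy to form.
    shapes = [(1, n_hidden), (1, n_hidden), (n_hidden, 1), (1, 1)]
    n_params = int(sum(np.prod(s) for s in shapes))
    theta = 0.1 * rng.normal(size=n_params)

    def unpack(theta):
        parts, i = [], 0
        for s in shapes:
            n = int(np.prod(s))
            parts.append(theta[i:i + n].reshape(s))
            i += n
        return parts

    def per_sample_grads(theta, X, y):
        # Manual backprop; returns (n_samples, n_params) gradients and errors.
        W1, b1, W2, b2 = unpack(theta)
        h = np.tanh(X @ W1 + b1)            # hidden activations, (n, n_hidden)
        err = (h @ W2 + b2) - y             # dL/d(output) for 0.5*err^2 loss
        dW2 = h * err                       # (n, n_hidden)
        db2 = err                           # (n, 1)
        dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
        dW1 = X * dh                        # (n, n_hidden); input dim is 1
        db1 = dh
        return np.hstack([dW1, db1, dW2, db2]), err

    eta, damping = 0.1, 1e-2
    for step in range(201):
        G, err = per_sample_grads(theta, X, y)
        g = G.mean(axis=0)                    # ordinary gradient
        F = G.T @ G / len(X)                  # empirical Fisher estimate
        F += damping * np.eye(n_params)       # damp before inverting
        theta -= eta * np.linalg.solve(F, g)  # natural gradient step
        if step % 50 == 0:
            print(f"step {step:3d}  mse {float(np.mean(err ** 2)):.4f}")

The O(n^3) linear solve over all n parameters at every step is the complexity burden that entry 1 analyzes; the adaptive schemes of entries 3 and 4 sidestep it by maintaining an online estimate of the inverse Fisher matrix instead of inverting it from scratch.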