549 related articles for article (PubMed ID: 17381272)
1. A new backpropagation learning algorithm for layered neural networks with nondifferentiable units.
Oohori T; Naganuma H; Watanabe K
Neural Comput; 2007 May; 19(5):1422-35. PubMed ID: 17381272
2. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Auer P; Burgsteiner H; Maass W
Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524
3. How inhibitory oscillations can train neural networks and punish competitors.
Norman KA; Newman E; Detre G; Polyn S
Neural Comput; 2006 Jul; 18(7):1577-610. PubMed ID: 16764515
4. Equivalence of backpropagation and contrastive Hebbian learning in a layered network.
Xie X; Seung HS
Neural Comput; 2003 Feb; 15(2):441-54. PubMed ID: 12590814
5. A hybrid learning network for shift, orientation, and scaling invariant pattern recognition.
Wang R
Network; 2001 Nov; 12(4):493-512. PubMed ID: 11762901
6. Polynomial harmonic GMDH learning networks for time series modeling.
Nikolaev NY; Iba H
Neural Netw; 2003 Dec; 16(10):1527-40. PubMed ID: 14622880
7. Meta-learning approach to neural network optimization.
Kordík P; Koutník J; Drchal J; Kovárík O; Cepek M; Snorek M
Neural Netw; 2010 May; 23(4):568-82. PubMed ID: 20227243
8. Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning.
Steil JJ
Neural Netw; 2007 Apr; 20(3):353-64. PubMed ID: 17517491
9. TAO-robust backpropagation learning algorithm.
Pernía-Espinoza AV; Ordieres-Meré JB; Martínez-de-Pisón FJ; González-Marcos A
Neural Netw; 2005 Mar; 18(2):191-204. PubMed ID: 15795116
10. Learning multiple layers of representation.
Hinton GE
Trends Cogn Sci; 2007 Oct; 11(10):428-34. PubMed ID: 17921042
11. Finite state automata resulting from temporal information maximization and a temporal learning rule.
Wennekers T; Ay N
Neural Comput; 2005 Oct; 17(10):2258-90. PubMed ID: 16105225
12. Learning algorithms based on linearization.
Hahnloser R
Network; 1998 Aug; 9(3):363-80. PubMed ID: 9861996
13. On the classification capability of sign-constrained perceptrons.
Legenstein R; Maass W
Neural Comput; 2008 Jan; 20(1):288-309. PubMed ID: 18045010
14. Neural associative memory with optimal Bayesian learning.
Knoblauch A
Neural Comput; 2011 Jun; 23(6):1393-451. PubMed ID: 21395440
15. Learning in human neural networks on microelectrode arrays.
Pizzi R; Cino G; Gelain F; Rossetti D; Vescovi A
Biosystems; 2007 Mar; 88(1-2):1-15. PubMed ID: 16843590
16. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.
Shimansky YP
Biol Cybern; 2009 Dec; 101(5-6):379-85. PubMed ID: 19844738
17. An alternative approach for neural network evolution with a genetic algorithm: crossover by combinatorial optimization.
García-Pedrajas N; Ortiz-Boyer D; Hervás-Martínez C
Neural Netw; 2006 May; 19(4):514-28. PubMed ID: 16343847
18. Improving generalization capabilities of dynamic neural networks.
Galicki M; Leistritz L; Zwick EB; Witte H
Neural Comput; 2004 Jun; 16(6):1253-82. PubMed ID: 15130249
19. Combining Hebbian and reinforcement learning in a minibrain model.
Bosman RJ; van Leeuwen WA; Wemmenhove B
Neural Netw; 2004 Jan; 17(1):29-36. PubMed ID: 14690704
20. [Neural networks]. (Article in German.)
Wieding JU; Schönle PW
Nervenarzt; 1991 Jul; 62(7):415-22. PubMed ID: 1922580