BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

127 related articles for article (PubMed ID: 18252502)

  • 1. Dynamic tunneling technique for efficient training of multilayer perceptrons.
    RoyChowdhury P; Singh YP; Chansarkar RA
    IEEE Trans Neural Netw; 1999; 10(1):48-55. PubMed ID: 18252502
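    (A simplified sketch of the tunneling idea appears after this list.)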

  • 2. Training trajectories by continuous recurrent multilayer networks.
    Leistritz L; Galicki M; Witte H; Kochs E
    IEEE Trans Neural Netw; 2002; 13(2):283-91. PubMed ID: 18244431

  • 3. High-order and multilayer perceptron initialization.
    Thimm G; Fiesler E
    IEEE Trans Neural Netw; 1997; 8(2):349-59. PubMed ID: 18255638

  • 4. Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks.
    Jabri M; Flower B
    IEEE Trans Neural Netw; 1992; 3(1):154-7. PubMed ID: 18276417
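    (A minimal weight-perturbation sketch appears after this list.)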

  • 5. Three methods to speed up the training of feedforward and feedback perceptrons.
    Agarwal M; Stäger F
    Neural Netw; 1997 Nov; 10(8):1435-43. PubMed ID: 12662484

  • 6. Complexity issues in natural gradient descent method for training multilayer perceptrons.
    Yang HH; Amari S
    Neural Comput; 1998 Nov; 10(8):2137-57. PubMed ID: 9804675
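    (A minimal natural-gradient sketch appears after this list.)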

  • 7. Statistical active learning in multilayer perceptrons.
    Fukumizu K
    IEEE Trans Neural Netw; 2000; 11(1):17-26. PubMed ID: 18249735

  • 8. Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks.
    Jabri M; Flower B
    Neural Comput; 1991; 3(4):546-65. PubMed ID: 31167340
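    (See the weight-perturbation sketch after this list.)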

  • 9. Evolutionary optimization framework to train multilayer perceptrons for engineering applications.
    Al-Hajj R; Fouad MM; Zeki M
    Math Biosci Eng; 2024 Jan; 21(2):2970-90. PubMed ID: 38454715

  • 10. Combinatorial evolution of regression nodes in feedforward neural networks.
    Schmitz GP; Aldrich C
    Neural Netw; 1999 Jan; 12(1):175-89. PubMed ID: 12662726

  • 11. On the initialization and optimization of multilayer perceptrons.
    Weymaere N; Martens JP
    IEEE Trans Neural Netw; 1994; 5(5):738-51. PubMed ID: 18267848

  • 12. Efficient training of multilayer perceptrons using principal component analysis.
    Bunzmann C; Biehl M; Urbanczik R
    Phys Rev E Stat Nonlin Soft Matter Phys; 2005 Aug; 72(2 Pt 2):026117. PubMed ID: 16196654

  • 13. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 14. Fast training of multilayer perceptrons.
    Verma B
    IEEE Trans Neural Netw; 1997; 8(6):1314-20. PubMed ID: 18255733

  • 15. Inverting feedforward neural networks using linear and nonlinear programming.
    Lu BL; Kita H; Nishikawa Y
    IEEE Trans Neural Netw; 1999; 10(6):1271-90. PubMed ID: 18252630

  • 16. Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm.
    Treadgold NK; Gedeon TD
    IEEE Trans Neural Netw; 1998; 9(4):662-8. PubMed ID: 18252489

  • 17. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 18. SAGRAD: a program for neural network training with simulated annealing and the conjugate gradient method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120:113-28. PubMed ID: 26958442

  • 19. Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks.
    Patra JC; Kot AC
    IEEE Trans Syst Man Cybern B Cybern; 2002; 32(4):505-11. PubMed ID: 18238146
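    (A minimal Chebyshev functional-link sketch appears after this list.)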

  • 20. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer.
    Ergezinger S; Thomsen E
    IEEE Trans Neural Netw; 1995; 6(1):31-42. PubMed ID: 18263283
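
ILLUSTRATIVE SKETCHES

The sketches below illustrate, in Python, a few of the training techniques named in the entries above. Each is a minimal reconstruction of the technique's general idea, not the algorithm as published; the data sets, architectures, and hyperparameters are invented for illustration.

For entry 1, gradient descent is alternated with a "tunneling" phase that tries to escape the current local minimum to a point of lower error before descent resumes. The random jumps below are a simplified stand-in for the paper's dynamic-tunneling dynamics, which are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(2)

    def loss(w):
        # Illustrative one-dimensional multimodal loss with several local minima.
        return np.sin(3 * w) + 0.1 * w ** 2

    def grad(w, eps=1e-6):
        # Central-difference estimate of the derivative of the loss.
        return (loss(w + eps) - loss(w - eps)) / (2 * eps)

    w, lr = 2.0, 0.05
    for phase in range(5):
        for _ in range(200):                     # descent phase
            w -= lr * grad(w)
        base = loss(w)
        for radius in (0.5, 1.0, 2.0, 4.0):      # "tunneling" phase: widen the
            cand = w + rng.normal(scale=radius)  # search until a point below the
            if loss(cand) < base:                # current minimum is found
                w = cand
                break
    print("final w and loss:", w, loss(w))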

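For entries 4 and 8, the gradient of the training error is estimated by perturbing one weight at a time and measuring the resulting change in error, so no backward pass is required; that property is what made the method attractive for analog VLSI. The tiny 2-4-1 tanh network, the product-of-inputs target, and the step sizes are illustrative assumptions, not the setup of the cited papers.

    import numpy as np

    rng = np.random.default_rng(0)

    def mse(w, X, y):
        # Error of a tiny 2-4-1 tanh network whose weights are packed into w.
        W1, b1 = w[:8].reshape(2, 4), w[8:12]
        W2, b2 = w[12:16].reshape(4, 1), w[16:17]
        h = np.tanh(X @ W1 + b1)
        return float(np.mean((h @ W2 + b2 - y) ** 2))

    X = rng.normal(size=(64, 2))
    y = X[:, :1] * X[:, 1:]          # illustrative target: product of the inputs
    w = rng.normal(scale=0.5, size=17)

    lr, pert = 0.05, 1e-4
    for epoch in range(200):
        base = mse(w, X, y)
        grad = np.zeros_like(w)
        for i in range(w.size):      # perturb each weight in turn and record
            w[i] += pert             # the forward-difference error change
            grad[i] = (mse(w, X, y) - base) / pert
            w[i] -= pert
        w -= lr * grad               # gradient step from the estimates
    print("final training MSE:", mse(w, X, y))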
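For entry 6, the ordinary gradient is preconditioned by the inverse of the Fisher information matrix. Logistic regression stands in for a multilayer perceptron here so the Fisher matrix stays cheap to form; the cited paper analyses the cost of the natural-gradient method for MLPs.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=500)).astype(float)

    w = np.zeros(3)
    lr = 0.5
    for step in range(50):
        p = 1 / (1 + np.exp(-X @ w))
        g = X.T @ (p - y) / len(y)                       # ordinary gradient of the NLL
        F = (X * (p * (1 - p))[:, None]).T @ X / len(y)  # empirical Fisher information
        w -= lr * np.linalg.solve(F + 1e-8 * np.eye(3), g)  # natural-gradient step
    print("estimated weights:", w)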
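For entry 19, the input is expanded with Chebyshev polynomials and only a single linear layer is trained on the expanded features, so no hidden layer is needed. The degree, target function, and direct least-squares fit are illustrative choices; the cited paper trains such an expansion iteratively for dynamic system identification.

    import numpy as np

    def chebyshev_expand(x, degree):
        # Map x in [-1, 1] to [T_0(x), ..., T_degree(x)] per feature, using the
        # recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x).
        terms = [np.ones_like(x), x]
        for _ in range(degree - 1):
            terms.append(2 * x * terms[-1] - terms[-2])
        return np.concatenate(terms, axis=1)

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(np.pi * x)                        # illustrative nonlinear target

    Phi = chebyshev_expand(x, degree=5)          # (200, 6) expanded features
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # one linear layer, solved directly
    print("fit RMSE:", float(np.sqrt(np.mean((Phi @ w - y) ** 2))))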