These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


126 related articles for article (PubMed ID: 12662484)

  • 1. Three Methods to Speed up the Training of Feedforward and Feedback Perceptrons.
    Agarwal M; Stäger F
Neural Netw; 1997 Nov; 10(8):1435-43. PubMed ID: 12662484

  • 2. Fast training of multilayer perceptrons.
    Verma B
    IEEE Trans Neural Netw; 1997; 8(6):1314-20. PubMed ID: 18255733

  • 3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 4. Dynamic tunneling technique for efficient training of multilayer perceptrons.
    RoyChowdhury P; Singh YP; Chansarkar RA
    IEEE Trans Neural Netw; 1999; 10(1):48-55. PubMed ID: 18252502

  • 5. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 6. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer.
    Ergezinger S; Thomsen E
    IEEE Trans Neural Netw; 1995; 6(1):31-42. PubMed ID: 18263283

  • 7. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.
    Widrow B; Greenblatt A; Kim Y; Park D
    Neural Netw; 2013 Jan; 37():182-8. PubMed ID: 23140797

  • 8. Comments on "An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer".
    van Milligen BP; Tribaldos V; Jiménez JA; Santa Cruz C
    IEEE Trans Neural Netw; 1998; 9(2):339-41. PubMed ID: 18252457

  • 9. Three learning phases for radial-basis-function networks.
    Schwenker F; Kestler HA; Palm G
    Neural Netw; 2001 May; 14(4-5):439-58. PubMed ID: 11411631

  • 10. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
    Huynh HT; Won Y; Kim JJ
    Int J Neural Syst; 2008 Oct; 18(5):433-41. PubMed ID: 18991365

  • 11. Combinatorial evolution of regression nodes in feedforward neural networks.
    Schmitz GP; Aldrich C
Neural Netw; 1999 Jan; 12(1):175-89. PubMed ID: 12662726

  • 12. Multifeedback-layer neural network.
    Savran A
    IEEE Trans Neural Netw; 2007 Mar; 18(2):373-84. PubMed ID: 17385626

  • 13. On the initialization and optimization of multilayer perceptrons.
    Weymaere N; Martens JP
    IEEE Trans Neural Netw; 1994; 5(5):738-51. PubMed ID: 18267848

  • 14. Evolutionary optimization framework to train multilayer perceptrons for engineering applications.
    Al-Hajj R; Fouad MM; Zeki M
    Math Biosci Eng; 2024 Jan; 21(2):2970-2990. PubMed ID: 38454715

  • 15. Methods of training and constructing multilayer perceptrons with arbitrary pattern sets.
    Liang X; Xia S
    Int J Neural Syst; 1995 Sep; 6(3):233-47. PubMed ID: 8589861

  • 16. A fast multilayer neural-network training algorithm based on the layer-by-layer optimizing procedures.
    Wang GJ; Chen CC
    IEEE Trans Neural Netw; 1996; 7(3):768-75. PubMed ID: 18263473

  • 17. Performance comparison of neural network training algorithms in modeling of bimodal drug delivery.
    Ghaffari A; Abdollahi H; Khoshayand MR; Bozchalooi IS; Dadgar A; Rafiee-Tehrani M
    Int J Pharm; 2006 Dec; 327(1-2):126-38. PubMed ID: 16959449

  • 18. Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks.
    Jabri M; Flower B
    Neural Comput; 1991; 3(4):546-565. PubMed ID: 31167340

  • 19. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120():113-28. PubMed ID: 26958442

  • 20. A hybrid linear/nonlinear training algorithm for feedforward neural networks.
    McLoone S; Brown MD; Irwin G; Lightbody A
    IEEE Trans Neural Netw; 1998; 9(4):669-84. PubMed ID: 18252490
