These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

321 related articles for article (PubMed ID: 16135404)

  • 1. Stability analysis of a three-term backpropagation algorithm.
    Zweiri YH; Seneviratne LD; Althoefer K
    Neural Netw; 2005 Dec; 18(10):1341-7. PubMed ID: 16135404

  • 2. New learning automata based algorithms for adaptation of backpropagation algorithm parameters.
    Meybodi MR; Beigy H
    Int J Neural Syst; 2002 Feb; 12(1):45-67. PubMed ID: 11852444

  • 3. Convergence analysis of a simple minor component analysis algorithm.
    Peng D; Yi Z; Luo W
    Neural Netw; 2007 Sep; 20(7):842-50. PubMed ID: 17765471

  • 4. On adaptive learning rate that guarantees convergence in feedforward networks.
    Behera L; Kumar S; Patnaik A
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1116-25. PubMed ID: 17001974

  • 5. TAO-robust backpropagation learning algorithm.
    Pernía-Espinoza AV; Ordieres-Meré JB; Martínez-de-Pisón FJ; González-Marcos A
    Neural Netw; 2005 Mar; 18(2):191-204. PubMed ID: 15795116

  • 6. Adaptive improved natural gradient algorithm for blind source separation.
    Liu JQ; Feng DZ; Zhang WW
    Neural Comput; 2009 Mar; 21(3):872-89. PubMed ID: 18928362

  • 7. An H(∞) control approach to robust learning of feedforward neural networks.
    Jing X
    Neural Netw; 2011 Sep; 24(7):759-66. PubMed ID: 21458228

  • 8. Magnified gradient function with deterministic weight modification in adaptive learning.
    Ng SC; Cheung CC; Leung SH
    IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769

  • 9. Global convergence of online BP training with dynamic learning rate.
    Zhang R; Xu ZB; Huang GB; Wang D
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):330-41. PubMed ID: 24808511

  • 10. Efficient calculation of the Gauss-Newton approximation of the Hessian matrix in neural networks.
    Fairbank M; Alonso E
    Neural Comput; 2012 Mar; 24(3):607-10. PubMed ID: 22168563

  • 11. Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces.
    May P; Zhou E; Lee CW
    Neural Netw; 2012 Oct; 34():72-9. PubMed ID: 22842197

  • 12. [Extended Kalman filtering trained neural networks and multicomponent analysis of amino acids].
    Li Z; Matsumoto S; Yu B; Sakai M; Li ML
    Guang Pu Xue Yu Guang Pu Fen Xi; 1997 Jun; 17(3):123-6. PubMed ID: 15810234

  • 13. Evolutionary product unit based neural networks for regression.
    Martínez-Estudillo A; Martínez-Estudillo F; Hervás-Martínez C; García-Pedrajas N
    Neural Netw; 2006 May; 19(4):477-86. PubMed ID: 16481148

  • 14. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.
    Lin WQ; Jiang JH; Zhou YP; Wu HL; Shen GL; Yu RQ
    J Comput Chem; 2007 Jan; 28(2):519-27. PubMed ID: 17186488

  • 15. A self-stabilizing MSA algorithm in high-dimension data stream.
    Kong X; Hu C; Han C
    Neural Netw; 2010 Sep; 23(7):865-71. PubMed ID: 20452742

  • 16. A modified error backpropagation algorithm for complex-value neural networks.
    Chen X; Tang Z; Variappan C; Li S; Okada T
    Int J Neural Syst; 2005 Dec; 15(6):435-43. PubMed ID: 16385633

  • 17. Evolving logic networks with real-valued inputs for fast incremental learning.
    Park MS; Choi JY
    IEEE Trans Syst Man Cybern B Cybern; 2009 Feb; 39(1):254-67. PubMed ID: 19068435

  • 18. Significant vector learning to construct sparse kernel regression models.
    Gao J; Shi D; Liu X
    Neural Netw; 2007 Sep; 20(7):791-8. PubMed ID: 17604953

  • 19. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 20. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks.
    Xu D; Zhang H; Liu L
    Neural Comput; 2010 Oct; 22(10):2655-77. PubMed ID: 20608871
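Article 1 (the seed article, PubMed ID 16135404) analyzes the stability of a three-term backpropagation algorithm, in which the weight update combines the usual gradient and momentum terms with a third, proportional term. As a rough illustration only: the exact update rule, the scalar error signal `e`, the network size, and every hyperparameter value below are assumptions for this sketch, not details taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR problem with a one-hidden-layer sigmoid network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed hyperparameters: learning rate, momentum factor, proportional factor.
alpha, beta, gamma = 0.5, 0.8, 0.01
params = [W1, b1, W2, b2]
vel = [np.zeros_like(p) for p in params]

losses = []
for epoch in range(3000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # Standard backprop gradients for the mean-squared error.
    d_out = err * out * (1.0 - out)
    dW2 = h.T @ d_out / len(X); db2 = d_out.mean(axis=0)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    dW1 = X.T @ d_h / len(X); db1 = d_h.mean(axis=0)

    # Three-term update: gradient term + momentum term + proportional term.
    e = float(np.mean(err))  # assumed scalar error signal for the third term
    for i, g in enumerate([dW1, db1, dW2, db2]):
        vel[i] = -alpha * g + beta * vel[i] + gamma * e
        params[i] += vel[i]

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The stability question the article studies arises because the momentum and proportional terms feed past updates back into the current one; poorly chosen `beta` or `gamma` can make the update sequence diverge even when plain gradient descent would converge.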
