These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

90 related articles for article (PubMed ID: 12079553)

  • 1. Fast curvature matrix-vector products for second-order gradient descent.
    Schraudolph NN
    Neural Comput; 2002 Jul; 14(7):1723-38. PubMed ID: 12079553

  • 2. Efficient calculation of the Gauss-Newton approximation of the Hessian matrix in neural networks.
    Fairbank M; Alonso E
    Neural Comput; 2012 Mar; 24(3):607-10. PubMed ID: 22168563

  • 3. A fast and scalable recurrent neural network based on stochastic meta descent.
    Liu Z; Elhanany I
    IEEE Trans Neural Netw; 2008 Sep; 19(9):1652-8. PubMed ID: 18779096

  • 4. Second-order stagewise backpropagation for Hessian-matrix analyses and investigation of negative curvature.
    Mizutani E; Dreyfus SE
    Neural Netw; 2008; 21(2-3):193-203. PubMed ID: 18272328

  • 5. The general inefficiency of batch training for gradient descent learning.
    Wilson DR; Martinez TR
    Neural Netw; 2003 Dec; 16(10):1429-51. PubMed ID: 14622875

  • 6. Ant colony optimization and stochastic gradient descent.
    Meuleau N; Dorigo M
    Artif Life; 2002; 8(2):103-21. PubMed ID: 12171633

  • 7. Learning curves for stochastic gradient descent in linear feedforward networks.
    Werfel J; Xie X; Seung HS
    Neural Comput; 2005 Dec; 17(12):2699-718. PubMed ID: 16212768

  • 8. Improving generalization performance of natural gradient learning using optimized regularization by NIC.
    Park H; Murata N; Amari S
    Neural Comput; 2004 Feb; 16(2):355-82. PubMed ID: 15006100

  • 9. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method.
    Bhaya A; Kaszkurewicz E
    Neural Netw; 2004 Jan; 17(1):65-71. PubMed ID: 14690708

  • 10. Natural learning in NLDA networks.
    González A; Dorronsoro JR
    Neural Netw; 2007 Jul; 20(5):610-20. PubMed ID: 17481855

  • 11. Limited stochastic meta-descent for kernel-based online learning.
    He W
    Neural Comput; 2009 Sep; 21(9):2667-86. PubMed ID: 19409057

  • 12. Robust stability of stochastic delayed additive neural networks with Markovian switching.
    Huang H; Ho DW; Qu Y
    Neural Netw; 2007 Sep; 20(7):799-809. PubMed ID: 17714914

  • 13. Subsampled Hessian Newton Methods for Supervised Learning.
    Wang CC; Huang CH; Lin CJ
    Neural Comput; 2015 Aug; 27(8):1766-95. PubMed ID: 26079755

  • 14. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.
    Mizutani E; Demmel JW
    Neural Netw; 2003; 16(5-6):745-53. PubMed ID: 12850030

  • 15. Asymptotic stability for neural networks with mixed time-delays: the discrete-time case.
    Liu Y; Wang Z; Liu X
    Neural Netw; 2009 Jan; 22(1):67-74. PubMed ID: 19028076

  • 16. Neural networks convergence using physicochemical data.
    Karelson M; Dobchev DA; Kulshyn OV; Katritzky AR
    J Chem Inf Model; 2006; 46(5):1891-7. PubMed ID: 16995718

  • 17. Improving dimensionality reduction with spectral gradient descent.
    Memisevic R; Hinton G
    Neural Netw; 2005; 18(5-6):702-10. PubMed ID: 16112551

  • 18. Improved computation for Levenberg-Marquardt training.
    Wilamowski BM; Yu H
    IEEE Trans Neural Netw; 2010 Jun; 21(6):930-7. PubMed ID: 20409991

  • 19. Second order gradient ascent pulse engineering.
    de Fouquieres P; Schirmer SG; Glaser SJ; Kuprov I
    J Magn Reson; 2011 Oct; 212(2):412-7. PubMed ID: 21885306

  • 20. Stability Analysis of the Modified Levenberg-Marquardt Algorithm for the Artificial Neural Network Training.
    Rubio JJ
    IEEE Trans Neural Netw Learn Syst; 2021 Aug; 32(8):3510-3524. PubMed ID: 32809947
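A recurring theme in the articles above (notably items 1, 2, and 13) is forming curvature matrix-vector products Hv without ever materializing the Hessian H. As a minimal sketch, assuming a small hand-written quadratic objective (the function and all names below are illustrative, not taken from any listed paper), the finite-difference form Hv ≈ (∇f(w + εv) − ∇f(w)) / ε costs only two gradient evaluations; the exact algorithmic-differentiation version in item 1 refines this to machine precision at the same asymptotic cost.

```python
# Hessian-vector product without forming the Hessian: a finite-difference
# sketch. For f(w) = 0.5*(3*w0^2 + 2*w0*w1 + 4*w1^2) the Hessian is the
# constant matrix A = [[3, 1], [1, 4]], so hess_vec should return ~A @ v.
# The objective and function names here are hypothetical examples.

def grad(w):
    # Analytic gradient of the quadratic above, i.e. A @ w.
    return [3.0 * w[0] + 1.0 * w[1], 1.0 * w[0] + 4.0 * w[1]]

def hess_vec(w, v, eps=1e-6):
    """Approximate H(w) @ v from two gradient evaluations."""
    g0 = grad(w)
    g1 = grad([wi + eps * vi for wi, vi in zip(w, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

w = [1.0, -2.0]
v = [0.5, 1.0]
print(hess_vec(w, v))  # ~ A @ v = [2.5, 4.5] up to floating-point error
```

For a quadratic the gradient is linear in w, so the finite difference is exact up to rounding; for general networks the same two-gradient recipe gives an O(ε) approximation, which is why the exact R-operator construction surveyed in these papers is preferred in practice.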

Page 1 of 5.