These tools are no longer maintained as of December 31, 2024; the archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

209 related articles for article (PubMed ID: 18252468)

  • 1. Advanced neural-network training algorithm with reduced complexity based on Jacobian deficiency.
    Zhou G; Si J
    IEEE Trans Neural Netw; 1998; 9(3):448-53. PubMed ID: 18252468
    [TBL] [Abstract][Full Text] [Related]  

  • 2. Efficient calculation of the Gauss-Newton approximation of the Hessian matrix in neural networks.
    Fairbank M; Alonso E
    Neural Comput; 2012 Mar; 24(3):607-10. PubMed ID: 22168563
    [TBL] [Abstract][Full Text] [Related]  

  • 3. Training two-layered feedforward networks with variable projection method.
    Kim CT; Lee JJ
    IEEE Trans Neural Netw; 2008 Feb; 19(2):371-5. PubMed ID: 18269969
    [TBL] [Abstract][Full Text] [Related]  

  • 4. A systematic and effective supervised learning mechanism based on Jacobian rank deficiency.
    Zhou G; Si J
    Neural Comput; 1998 May; 10(4):1031-45. PubMed ID: 9573418
    [TBL] [Abstract][Full Text] [Related]  

  • 5. Stability Analysis of the Modified Levenberg-Marquardt Algorithm for the Artificial Neural Network Training.
    Rubio JJ
    IEEE Trans Neural Netw Learn Syst; 2021 Aug; 32(8):3510-3524. PubMed ID: 32809947
    [TBL] [Abstract][Full Text] [Related]  

  • 6. A new Jacobian matrix for optimal learning of single-layer neural networks.
    Peng JX; Li K; Irwin GW
    IEEE Trans Neural Netw; 2008 Jan; 19(1):119-29. PubMed ID: 18269943
    [TBL] [Abstract][Full Text] [Related]  

  • 7. Improved computation for Levenberg-Marquardt training.
    Wilamowski BM; Yu H
    IEEE Trans Neural Netw; 2010 Jun; 21(6):930-7. PubMed ID: 20409991
    [TBL] [Abstract][Full Text] [Related]  

  • 8. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.
    Gan M; Chen CLP; Chen GY; Chen L
    IEEE Trans Cybern; 2018 Oct; 48(10):2866-2874. PubMed ID: 28981436
    [TBL] [Abstract][Full Text] [Related]  

  • 9. Extended least squares based algorithm for training feedforward networks.
    Yam JF; Chow TS
    IEEE Trans Neural Netw; 1997; 8(3):806-10. PubMed ID: 18255683
    [TBL] [Abstract][Full Text] [Related]  

  • 10. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990
    [TBL] [Abstract][Full Text] [Related]  

  • 11. Neighborhood based Levenberg-Marquardt algorithm for neural network training.
    Lera G; Pinzolas M
    IEEE Trans Neural Netw; 2002; 13(5):1200-3. PubMed ID: 18244516
    [TBL] [Abstract][Full Text] [Related]  

  • 12. Training feedforward networks with the Marquardt algorithm.
    Hagan MT; Menhaj MB
    IEEE Trans Neural Netw; 1994; 5(6):989-93. PubMed ID: 18267874
    [TBL] [Abstract][Full Text] [Related]  

  • 13. Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter.
    Fu X; Li S; Fairbank M; Wunsch DC; Alonso E
    IEEE Trans Neural Netw Learn Syst; 2015 Sep; 26(9):1900-12. PubMed ID: 25330496
    [TBL] [Abstract][Full Text] [Related]  

  • 14. Improvement of the neighborhood based Levenberg-Marquardt algorithm by local adaptation of the learning coefficient.
    Toledo A; Pinzolas M; Ibarrola JJ; Lera G
    IEEE Trans Neural Netw; 2005 Jul; 16(4):988-92. PubMed ID: 16121740
    [TBL] [Abstract][Full Text] [Related]  

  • 15. Parameter incremental learning algorithm for neural networks.
    Wan S; Banta LE
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1424-38. PubMed ID: 17131658
    [TBL] [Abstract][Full Text] [Related]  

  • 16. Subset-based training and pruning of sigmoid neural networks.
    Zhou G; Si J
    Neural Netw; 1999 Jan; 12(1):79-89. PubMed ID: 12662718
    [TBL] [Abstract][Full Text] [Related]  

  • 17. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.
    Mizutani E; Demmel JW
    Neural Netw; 2003; 16(5-6):745-53. PubMed ID: 12850030
    [TBL] [Abstract][Full Text] [Related]  

  • 18. Neural Network Training With Levenberg-Marquardt and Adaptable Weight Compression.
    Smith JS; Wu B; Wilamowski BM
    IEEE Trans Neural Netw Learn Syst; 2019 Feb; 30(2):580-587. PubMed ID: 29994621
    [TBL] [Abstract][Full Text] [Related]  

  • 19. Supervised training of dynamical neural networks for associative memory design and identification of nonlinear maps.
    Sudharsanan SI; Sundareshan MK
    Int J Neural Syst; 1994 Sep; 5(3):165-80. PubMed ID: 7866623
    [TBL] [Abstract][Full Text] [Related]  

  • 20. Efficient training algorithms for a class of shunting inhibitory convolutional neural networks.
    Tivive FH; Bouzerdoum A
    IEEE Trans Neural Netw; 2005 May; 16(3):541-56. PubMed ID: 15940985
    [TBL] [Abstract][Full Text] [Related]  

    Page 1 of 11.