

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

143 related articles for the article (PubMed ID: 31167300)

  • 1. On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks.
    Luo ZQ
    Neural Comput; 1991; 3(2):226-245. PubMed ID: 31167300

  • 2. Statistical efficiency of adaptive algorithms.
    Widrow B; Kamenetsky M
    Neural Netw; 2003; 16(5-6):735-44. PubMed ID: 12850029

  • 3. A linear recurrent kernel online learning algorithm with sparse updates.
    Fan H; Song Q
    Neural Netw; 2014 Feb; 50():142-53. PubMed ID: 24300551

  • 4. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.
    Zhang H; Zhang Y; Xu D; Liu X
    Cogn Neurodyn; 2015 Jun; 9(3):331-40. PubMed ID: 25972981

  • 5. Learning and convergence analysis of neural-type structured networks.
    Polycarpou MM; Ioannou PA
    IEEE Trans Neural Netw; 1992; 3(1):39-50. PubMed ID: 18276404

  • 6. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 7. A local linearized least squares algorithm for training feedforward neural networks.
    Stan O; Kamen E
    IEEE Trans Neural Netw; 2000; 11(2):487-95. PubMed ID: 18249777

  • 8. Adaptive complex-valued stepsize based fast learning of complex-valued neural networks.
    Zhang Y; Huang H
    Neural Netw; 2020 Apr; 124():233-242. PubMed ID: 32018161

  • 9. Analysis of boundedness and convergence of online gradient method for two-layer feedforward neural networks.
    Xu L; Chen J; Huang D; Lu J; Fang L
    IEEE Trans Neural Netw Learn Syst; 2013 Aug; 24(8):1327-38. PubMed ID: 24808571

  • 10. Robust adaptive learning of feedforward neural networks via LMI optimizations.
    Jing X
    Neural Netw; 2012 Jul; 31():33-45. PubMed ID: 22459273

  • 11. Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm.
    Treadgold NK; Gedeon TD
    IEEE Trans Neural Netw; 1998; 9(4):662-8. PubMed ID: 18252489

  • 12. On the weight convergence of Elman networks.
    Song Q
    IEEE Trans Neural Netw; 2010 Mar; 21(3):463-80. PubMed ID: 20129857

  • 13. Extended least squares based algorithm for training feedforward networks.
    Yam JF; Chow TS
    IEEE Trans Neural Netw; 1997; 8(3):806-10. PubMed ID: 18255683

  • 14. A parallel genetic/neural network learning algorithm for MIMD shared memory machines.
    Hung SL; Adeli H
    IEEE Trans Neural Netw; 1994; 5(6):900-9. PubMed ID: 18267864

  • 15. Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks.
    Wang J; Yang J; Wu W
    IEEE Trans Neural Netw; 2011 Aug; 22(8):1297-306. PubMed ID: 21813357

  • 16. Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations.
    Karayiannis NB
    IEEE Trans Neural Netw; 1996; 7(2):419-26. PubMed ID: 18255595

  • 17. Dynamic optimal learning rates of a certain class of fuzzy neural networks and its applications with genetic algorithm.
    Wang CH; Liu HL; Lin CT
    IEEE Trans Syst Man Cybern B Cybern; 2001; 31(3):467-75. PubMed ID: 18244813

  • 18. LMS learning algorithms: misconceptions and new results on convergence.
    Wang ZQ; Manry MT; Schiano JL
    IEEE Trans Neural Netw; 2000; 11(1):47-56. PubMed ID: 18249738

  • 19. On adaptive learning rate that guarantees convergence in feedforward networks.
    Behera L; Kumar S; Patnaik A
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1116-25. PubMed ID: 17001974

  • 20. Boundedness and convergence analysis of weight elimination for cyclic training of neural networks.
    Wang J; Ye Z; Gao W; Zurada JM
    Neural Netw; 2016 Oct; 82():49-61. PubMed ID: 27472447
