PUBMED FOR HANDHELDS

Journal Abstract Search


237 related items for PubMed ID: 24808511

  • 1. Global convergence of online BP training with dynamic learning rate.
    Zhang R, Xu ZB, Huang GB, Wang D.
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):330-41. PubMed ID: 24808511
    [Abstract] [Full Text] [Related]

  • 2. On adaptive learning rate that guarantees convergence in feedforward networks.
    Behera L, Kumar S, Patnaik A.
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1116-25. PubMed ID: 17001974
    [Abstract] [Full Text] [Related]

  • 3. When does online BP training converge?
    Xu ZB, Zhang R, Jing WF.
    IEEE Trans Neural Netw; 2009 Oct; 20(10):1529-39. PubMed ID: 19695997
    [Abstract] [Full Text] [Related]

  • 4. Magnified gradient function with deterministic weight modification in adaptive learning.
    Ng SC, Cheung CC, Leung SH.
    IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769
    [Abstract] [Full Text] [Related]

  • 5. On the weight convergence of Elman networks.
    Song Q.
    IEEE Trans Neural Netw; 2010 Mar; 21(3):463-80. PubMed ID: 20129857
    [Abstract] [Full Text] [Related]

  • 7. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.
    Tan WW.
    ISA Trans; 2007 Apr; 46(2):181-8. PubMed ID: 17337268
    [Abstract] [Full Text] [Related]

  • 10. Adaptive computation algorithm for RBF neural network.
    Han HG, Qiao JF.
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):342-7. PubMed ID: 24808512
    [Abstract] [Full Text] [Related]

  • 13. Analysis of boundedness and convergence of online gradient method for two-layer feedforward neural networks.
    Xu L, Chen J, Huang D, Lu J, Fang L.
    IEEE Trans Neural Netw Learn Syst; 2013 Aug; 24(8):1327-38. PubMed ID: 24808571
    [Abstract] [Full Text] [Related]

  • 14. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q, Wu Y, Soh YC.
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640
    [Abstract] [Full Text] [Related]

  • 17. Convergence of gradient method with momentum for two-layer feedforward neural networks.
    Zhang N, Wu W, Zheng G.
    IEEE Trans Neural Netw; 2006 Mar; 17(2):522-5. PubMed ID: 16566479
    [Abstract] [Full Text] [Related]

  • 18. A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks.
    Man Z, Wu HR, Liu S, Yu X.
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1580-91. PubMed ID: 17131670
    [Abstract] [Full Text] [Related]

  • 20. A New Correntropy-Based Conjugate Gradient Backpropagation Algorithm for Improving Training in Neural Networks.
    Heravi AR, Abed Hodtani G.
    IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):6252-6263. PubMed ID: 29993752
    [Abstract] [Full Text] [Related]


    Page 1 of 12. [Next] [New Search]