These tools are no longer maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

160 related articles for article (PubMed ID: 25972981)

  • 1. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.
    Zhang H; Zhang Y; Xu D; Liu X
    Cogn Neurodyn; 2015 Jun; 9(3):331-40. PubMed ID: 25972981
    (a minimal illustrative sketch of this method appears after the list)

  • 2. Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks.
    Fan Q; Wu W; Zurada JM
    Springerplus; 2016; 5():295. PubMed ID: 27066332

  • 3. Convergence analysis of online gradient method for BP neural networks.
    Wu W; Wang J; Cheng M; Li Z
    Neural Netw; 2011 Jan; 24(1):91-8. PubMed ID: 20870390

  • 4. Analysis of boundedness and convergence of online gradient method for two-layer feedforward neural networks.
    Xu L; Chen J; Huang D; Lu J; Fang L
    IEEE Trans Neural Netw Learn Syst; 2013 Aug; 24(8):1327-38. PubMed ID: 24808571

  • 5. Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks.
    Wang J; Yang J; Wu W
    IEEE Trans Neural Netw; 2011 Aug; 22(8):1297-306. PubMed ID: 21813357

  • 6. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.
    Wu W; Fan Q; Zurada JM; Wang J; Yang D; Liu Y
    Neural Netw; 2014 Feb; 50():72-8. PubMed ID: 24291693
    (a minimal illustrative sketch of this penalty appears after the list)

  • 7. Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus.
    Zhang H; Liu X; Xu D; Zhang Y
    Cogn Neurodyn; 2014 Jun; 8(3):261-6. PubMed ID: 24808934

  • 8. Deterministic convergence of an online gradient method for BP neural networks.
    Wu W; Feng G; Li Z; Xu Y
    IEEE Trans Neural Netw; 2005 May; 16(3):533-40. PubMed ID: 15940984

  • 9. When does online BP training converge?
    Xu ZB; Zhang R; Jing WF
    IEEE Trans Neural Netw; 2009 Oct; 20(10):1529-39. PubMed ID: 19695997

  • 10. Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application.
    Zhang B; Liu Y; Cao J; Wu S; Wang J
    Neural Netw; 2019 Jul; 115():50-64. PubMed ID: 30974301

  • 11. Magnified gradient function with deterministic weight modification in adaptive learning.
    Ng SC; Cheung CC; Leung SH
    IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769

  • 12. Convergence of gradient method with momentum for two-layer feedforward neural networks.
    Zhang N; Wu W; Zheng G
    IEEE Trans Neural Netw; 2006 Mar; 17(2):522-5. PubMed ID: 16566479

  • 13. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 14. Boundedness and convergence of online gradient method with penalty for feedforward neural networks.
    Zhang H; Wu W; Liu F; Yao M
    IEEE Trans Neural Netw; 2009 Jun; 20(6):1050-4. PubMed ID: 19435681

  • 15. Training pi-sigma network by online gradient algorithm with penalty for small weight update.
    Xiong Y; Wu W; Kang X; Zhang C
    Neural Comput; 2007 Dec; 19(12):3356-68. PubMed ID: 17970657

  • 16. Boundedness and convergence analysis of weight elimination for cyclic training of neural networks.
    Wang J; Ye Z; Gao W; Zurada JM
    Neural Netw; 2016 Oct; 82():49-61. PubMed ID: 27472447

  • 17. A linear recurrent kernel online learning algorithm with sparse updates.
    Fan H; Song Q
    Neural Netw; 2014 Feb; 50():142-53. PubMed ID: 24300551

  • 18. Optimization-based learning with bounded error for feedforward neural networks.
    Alessandri A; Sanguineti M; Maggiore M
    IEEE Trans Neural Netw; 2002; 13(2):261-73. PubMed ID: 18244429

  • 19. A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm.
    Abid S; Fnaiech F; Najim M
    IEEE Trans Neural Netw; 2001; 12(2):424-30. PubMed ID: 18244397

  • 20. Neural networks and chaos: construction, evaluation of chaotic networks, and prediction of chaos with multilayer feedforward networks.
    Bahi JM; Couchot JF; Guyeux C; Salomon M
    Chaos; 2012 Mar; 22(1):013122. PubMed ID: 22462998
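
A minimal sketch of the chaos injection-based gradient method named in entry 1, assuming a toy least-squares objective, a logistic-map chaos source, and a 1/(k+1) decay schedule; these specifics are illustrative choices, not details taken from the paper. The point of the decay is that the injected perturbation vanishes over time, so the iteration eventually reduces to plain gradient descent, which is what makes a deterministic convergence result plausible.

    import numpy as np

    # Sketch only (not the authors' code): gradient descent on a toy
    # least-squares problem with a decaying chaotic perturbation added to
    # each update. The logistic map x <- 4x(1-x) supplies the chaotic
    # sequence; beta_k = beta0 / (k + 1) is an assumed decay schedule.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))                  # toy design matrix
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # toy targets

    w = np.zeros(5)         # weights to learn
    x_chaos = 0.37          # logistic-map state in (0, 1)
    eta, beta0 = 0.01, 0.5  # learning rate and initial injection scale

    for k in range(2000):
        grad = X.T @ (X @ w - y) / len(y)          # gradient of 0.5 * MSE
        x_chaos = 4.0 * x_chaos * (1.0 - x_chaos)  # advance the chaotic sequence
        beta = beta0 / (k + 1)                     # decaying injection magnitude
        w -= eta * grad + beta * (x_chaos - 0.5)   # chaos-perturbed update

    print("learned weights:", np.round(w, 2))      # approx [1, -2, 0.5, 0, 3]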

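A minimal sketch of batch gradient descent with a smoothed L1/2 penalty, the technique named in entry 6. The paper's exact smoothing function is not reproduced here; the surrogate (w^2 + eps)^(1/4) for |w|^(1/2) is an assumption chosen only because it is differentiable at zero, with gradient w / (2 * (w^2 + eps)^(3/4)).

    import numpy as np

    # Sketch only: a one-hidden-layer network trained by batch gradient
    # descent, with a smoothed L1/2 penalty on all weights. The penalty
    # drives small weights toward zero, encouraging sparsity.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = np.tanh(X @ np.array([2.0, -1.0, 0.5]))  # toy regression targets

    W = rng.normal(scale=0.1, size=(3, 8))       # input-to-hidden weights
    v = rng.normal(scale=0.1, size=8)            # hidden-to-output weights
    eta, lam, eps = 0.05, 1e-3, 1e-4             # step size, penalty weight, smoothing

    for _ in range(500):
        H = np.tanh(X @ W)                       # hidden activations
        err = H @ v - y                          # batch residuals
        grad_v = H.T @ err / len(y)
        grad_W = X.T @ ((err[:, None] * v) * (1.0 - H**2)) / len(y)
        # add the gradient of the smoothed L1/2 regularizer
        grad_W += lam * W / (2.0 * (W**2 + eps) ** 0.75)
        grad_v += lam * v / (2.0 * (v**2 + eps) ** 0.75)
        W -= eta * grad_W
        v -= eta * grad_v

    print("final batch MSE:", round(float(np.mean((np.tanh(X @ W) @ v - y) ** 2)), 4))
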
    Page 1 of 8.