These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

141 related articles for article (PubMed ID: 27066332)

  • 1. Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks.
    Fan Q; Wu W; Zurada JM
    Springerplus; 2016; 5():295. PubMed ID: 27066332

  • 2. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.
    Wu W; Fan Q; Zurada JM; Wang J; Yang D; Liu Y
    Neural Netw; 2014 Feb; 50():72-8. PubMed ID: 24291693

  • 3. Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks.
    Wang J; Yang J; Wu W
    IEEE Trans Neural Netw; 2011 Aug; 22(8):1297-306. PubMed ID: 21813357

  • 4. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.
    Zhang H; Zhang Y; Xu D; Liu X
    Cogn Neurodyn; 2015 Jun; 9(3):331-40. PubMed ID: 25972981

  • 5. Deep convolutional neural network and IoT technology for healthcare.
    Wassan S; Dongyan H; Suhail B; Jhanjhi NZ; Xiao G; Ahmed S; Murugesan RK
    Digit Health; 2024; 10():20552076231220123. PubMed ID: 38250147

  • 6. The convergence analysis of SpikeProp algorithm with smoothing L1/2 regularization.
    Zhao J; Zurada JM; Yang J; Wu W
    Neural Netw; 2018 Jul; 103():19-28. PubMed ID: 29625353

  • 7. Convergence of gradient method with momentum for two-layer feedforward neural networks.
    Zhang N; Wu W; Zheng G
    IEEE Trans Neural Netw; 2006 Mar; 17(2):522-5. PubMed ID: 16566479

  • 8. Convergence analysis of online gradient method for BP neural networks.
    Wu W; Wang J; Cheng M; Li Z
    Neural Netw; 2011 Jan; 24(1):91-8. PubMed ID: 20870390

  • 9. Analysis of Tikhonov regularization for function approximation by neural networks.
    Burger M; Neubauer A
    Neural Netw; 2003 Jan; 16(1):79-90. PubMed ID: 12576108

  • 10. A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks.
    Man Z; Wu HR; Liu S; Yu X
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1580-91. PubMed ID: 17131670

  • 11. Convergence Analysis of Online Gradient Method for High-Order Neural Networks and Their Sparse Optimization.
    Fan Q; Kang Q; Zurada JM; Huang T; Xu D
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; PP():. PubMed ID: 37847629

  • 12. Curvature-driven smoothing: a learning algorithm for feedforward networks.
    Bishop CM
    IEEE Trans Neural Netw; 1993; 4(5):882-4. PubMed ID: 18276518

  • 13. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization.
    Liu J; Kong J; Xu D; Qi M; Lu Y
    Neural Netw; 2022 Jan; 145():300-307. PubMed ID: 34785445

  • 14. Convergence analysis of sparse TSK fuzzy systems based on spectral Dai-Yuan conjugate gradient and application to high-dimensional feature selection.
    Ji D; Fan Q; Dong Q; Liu Y
    Neural Netw; 2024 Nov; 179():106599. PubMed ID: 39142176

  • 15. Data classification based on fractional order gradient descent with momentum for RBF neural network.
    Xue H; Shao Z; Sun H
    Network; 2020; 31(1-4):166-185. PubMed ID: 33283569

  • 16. Magnified gradient function with deterministic weight modification in adaptive learning.
    Ng SC; Cheung CC; Leung SH
    IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769

  • 17. Neural network for a class of sparse optimization with L0 regularization.
    Wei Z; Li Q; Wei J; Bian W
    Neural Netw; 2022 Jul; 151():211-221. PubMed ID: 35439665

  • 18. Smoothing neural network for L0 regularized optimization problem with general convex constraints.
    Li W; Bian W
    Neural Netw; 2021 Nov; 143():678-689. PubMed ID: 34403868

  • 19. Training pi-sigma network by online gradient algorithm with penalty for small weight update.
    Xiong Y; Wu W; Kang X; Zhang C
    Neural Comput; 2007 Dec; 19(12):3356-68. PubMed ID: 17970657

  • 20. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
    Sun H; Shen L; Zhong Q; Ding L; Chen S; Sun J; Li J; Sun G; Tao D
    Neural Netw; 2024 Jan; 169():506-519. PubMed ID: 37944247
