


126 related articles for article (PubMed ID: 34487503)

  • 1. Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm.
    Sum J; Leung CS
    IEEE Trans Neural Netw Learn Syst; 2023 May; 34(5):2619-2632. PubMed ID: 34487503

  • 2. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.
    Ho KI; Leung CS; Sum J
    IEEE Trans Neural Netw; 2010 Jun; 21(6):938-47. PubMed ID: 20388593

  • 3. A Limitation of Gradient Descent Learning.
    Sum J; Leung CS; Ho K
    IEEE Trans Neural Netw Learn Syst; 2020 Jun; 31(6):2227-2232. PubMed ID: 31398136

  • 4. A fault-tolerant regularizer for RBF networks.
    Leung CS; Sum JP
    IEEE Trans Neural Netw; 2008 Mar; 19(3):493-507. PubMed ID: 18334367

  • 5. Objective functions of online weight noise injection training algorithms for MLPs.
    Ho K; Leung CS; Sum J
    IEEE Trans Neural Netw; 2011 Feb; 22(2):317-23. PubMed ID: 21189237

  • 6. Weight Noise Injection-Based MLPs With Group Lasso Penalty: Asymptotic Convergence and Application to Node Pruning.
    Wang J; Chang Q; Chang Q; Liu Y; Pal NR
    IEEE Trans Cybern; 2019 Dec; 49(12):4346-4364. PubMed ID: 30530381

  • 7. Approximate Bayesian MLP regularization for regression in the presence of noise.
    Park JG; Jo S
    Neural Netw; 2016 Nov; 83():75-85. PubMed ID: 27584575

  • 8. RBF networks under the concurrent fault situation.
    Leung CS; Sum JP
    IEEE Trans Neural Netw Learn Syst; 2012 Jul; 23(7):1148-55. PubMed ID: 24807140

  • 9. A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.
    Leung CS; Wan WY; Feng R
    IEEE Trans Neural Netw Learn Syst; 2017 Jun; 28(6):1360-1372. PubMed ID: 28113823

  • 10. Objective Function and Learning Algorithm for the General Node Fault Situation.
    Xiao Y; Feng RB; Leung CS; Sum J
    IEEE Trans Neural Netw Learn Syst; 2016 Apr; 27(4):863-74. PubMed ID: 26990391

  • 11. On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise.
    Sum JP; Leung CS; Ho KI
    IEEE Trans Neural Netw; 2009 Jan; 20(1):124-38. PubMed ID: 19109090

  • 12. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.
    Sum JP; Leung CS; Ho KI
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):211-22. PubMed ID: 24808501

  • 13. A new measurement of noise immunity and generalization ability for MLPs.
    Bernier JL; Ortega J; Ros E; Rojas I; Prieto A
    Int J Neural Syst; 1999 Dec; 9(6):511-21. PubMed ID: 10651334

  • 14. Three learning phases for radial-basis-function networks.
    Schwenker F; Kestler HA; Palm G
    Neural Netw; 2001 May; 14(4-5):439-58. PubMed ID: 11411631

  • 15. ADMM-Based Algorithm for Training Fault Tolerant RBF Networks and Selecting Centers.
    Wang H; Feng R; Han ZF; Leung CS
    IEEE Trans Neural Netw Learn Syst; 2018 Aug; 29(8):3870-3878. PubMed ID: 28816680

  • 16. A quantitative study of fault tolerance, noise immunity, and generalization ability of MLPs.
    Bernier JL; Ortega J; Ros E; Rojas I; Prieto A
    Neural Comput; 2000 Dec; 12(12):2941-64. PubMed ID: 11112261

  • 17. Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvements.
    Murray AF; Edwards PJ
    IEEE Trans Neural Netw; 1993; 4(4):722-5. PubMed ID: 18267774

  • 18. On global-local artificial neural networks for function approximation.
    Wedge D; Ingram D; McLean D; Mingham C; Bandar Z
    IEEE Trans Neural Netw; 2006 Jul; 17(4):942-952. PubMed ID: 16856657

  • 19. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network.
    Poernomo A; Kang DK
    Neural Netw; 2018 Aug; 104():60-67. PubMed ID: 29715684

  • 20. Predicting Inpatient Payments Prior to Lower Extremity Arthroplasty Using Deep Learning: Which Model Architecture Is Best?
    Karnuta JM; Navarro SM; Haeberle HS; Helm JM; Kamath AF; Schaffer JL; Krebs VE; Ramkumar PN
    J Arthroplasty; 2019 Oct; 34(10):2235-2241.e1. PubMed ID: 31230954
