BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

123 related articles for the article with PubMed ID 10651334 (the first 20 are listed below)

  • 1. A new measurement of noise immunity and generalization ability for MLPs.
    Bernier JL; Ortega J; Ros E; Rojas I; Prieto A
    Int J Neural Syst; 1999 Dec; 9(6):511-21. PubMed ID: 10651334

  • 2. A quantitative study of fault tolerance, noise immunity, and generalization ability of MLPs.
    Bernier JL; Ortega J; Ros E; Rojas I; Prieto A
    Neural Comput; 2000 Dec; 12(12):2941-64. PubMed ID: 11112261

  • 3. Approximate Bayesian MLP regularization for regression in the presence of noise.
    Park JG; Jo S
    Neural Netw; 2016 Nov; 83:75-85. PubMed ID: 27584575

  • 4. Fast accurate MEG source localization using a multilayer perceptron trained with real brain noise.
    Jun SC; Pearlmutter BA; Nolte G
    Phys Med Biol; 2002 Jul; 47(14):2547-60. PubMed ID: 12171339

  • 5. Objective functions of online weight noise injection training algorithms for MLPs.
    Ho K; Leung CS; Sum J
    IEEE Trans Neural Netw; 2011 Feb; 22(2):317-23. PubMed ID: 21189237

  • 6. Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm.
    Sum J; Leung CS
    IEEE Trans Neural Netw Learn Syst; 2023 May; 34(5):2619-2632. PubMed ID: 34487503

  • 7. Multilayer perceptron classification of unknown volatile chemicals from the firing rates of insect olfactory sensory neurons and its application to biosensor design.
    Bachtiar LR; Unsworth CP; Newcomb RD; Crampin EJ
    Neural Comput; 2013 Jan; 25(1):259-87. PubMed ID: 23020109

  • 8. MEG source localization using an MLP with a distributed output representation.
    Jun SC; Pearlmutter BA; Nolte G
    IEEE Trans Biomed Eng; 2003 Jun; 50(6):786-9. PubMed ID: 12814246

  • 9. Neural architecture design based on extreme learning machine.
    Bueno-Crespo A; García-Laencina PJ; Sancho-Gómez JL
    Neural Netw; 2013 Dec; 48:19-24. PubMed ID: 23892908

  • 10. Feature selection for MLP neural network: the use of random permutation of probabilistic outputs.
    Yang JB; Shen KQ; Ong CJ; Li XP
    IEEE Trans Neural Netw; 2009 Dec; 20(12):1911-22. PubMed ID: 19822474

  • 11. DMP3: a dynamic multilayer perceptron construction algorithm.
    Andersen TL; Martinez TR
    Int J Neural Syst; 2001 Apr; 11(2):145-65. PubMed ID: 14632168

  • 12. A feed-forward network for input that is both categorical and quantitative.
    Brouwer RK
    Neural Netw; 2002 Sep; 15(7):881-90. PubMed ID: 14672165

  • 13. Upper bound of the expected training error of neural network regression for a Gaussian noise sequence.
    Hagiwara K; Hayasaka T; Toda N; Usui S; Kuno K
    Neural Netw; 2001 Dec; 14(10):1419-29. PubMed ID: 11771721

  • 14. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.
    Sum JP; Leung CS; Ho KI
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):211-22. PubMed ID: 24808501

  • 15. Channel selection and classification of electroencephalogram signals: an artificial neural network and genetic algorithm-based approach.
    Yang J; Singh H; Hines EL; Schlaghecken F; Iliescu DD; Leeson MS; Stocks NG
    Artif Intell Med; 2012 Jun; 55(2):117-26. PubMed ID: 22503644

  • 16. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 17. A new formulation for feedforward neural networks.
    Razavi S; Tolson BA
    IEEE Trans Neural Netw; 2011 Oct; 22(10):1588-98. PubMed ID: 21859600

  • 18. Optimized approximation algorithm in neural networks without overfitting.
    Liu Y; Starzyk JA; Zhu Z
    IEEE Trans Neural Netw; 2008 Jun; 19(6):983-95. PubMed ID: 18541499

  • 19. Sensitivity analysis of multilayer perceptron with differentiable activation functions.
    Choi JY; Choi CH
    IEEE Trans Neural Netw; 1992; 3(1):101-7. PubMed ID: 18276410

  • 20. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990
