97 related articles for article (PubMed ID: 18255571)
1. A probabilistic model for the fault tolerance of multilayer perceptrons. Merchawi NS; Kumara ST; Das CR. IEEE Trans Neural Netw; 1996; 7(1):201-5. PubMed ID: 18255571
2. Determining and improving the fault tolerance of multilayer perceptrons in a pattern-recognition application. Emmerson MD; Damper RI. IEEE Trans Neural Netw; 1993; 4(5):788-93. PubMed ID: 18276508
3. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis. Sum JP; Leung CS; Ho KI. IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):211-22. PubMed ID: 24808501
4. Multilayer Potts perceptrons with Levenberg-Marquardt learning. Wu JM. IEEE Trans Neural Netw; 2008 Dec; 19(12):2032-43. PubMed ID: 19054728
5. A quantified sensitivity measure for multilayer perceptron to input perturbation. Zeng X; Yeung DS. Neural Comput; 2003 Jan; 15(1):183-212. PubMed ID: 12590825
6. Local linear perceptrons for classification. Alpaydin E; Jordan MI. IEEE Trans Neural Netw; 1996; 7(3):788-94. PubMed ID: 18263476
7. Sensitivity analysis of multilayer perceptron to input and weight perturbations. Zeng X; Yeung DS. IEEE Trans Neural Netw; 2001; 12(6):1358-66. PubMed ID: 18249965
8. Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvements. Murray AF; Edwards PJ. IEEE Trans Neural Netw; 1993; 4(4):722-5. PubMed ID: 18267774
9. Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. Murray AF; Edwards PJ. IEEE Trans Neural Netw; 1994; 5(5):792-802. PubMed ID: 18267852
10. Performance evaluation of multilayer perceptrons in signal detection and classification. Michalopoulou ZH; Nolte LW; Alexandrou D. IEEE Trans Neural Netw; 1995; 6(2):381-6. PubMed ID: 18263320
11. Sensitivity analysis of multilayer perceptron with differentiable activation functions. Choi JY; Choi CH. IEEE Trans Neural Netw; 1992; 3(1):101-7. PubMed ID: 18276410
12. A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Cavalieri S; Mirabella O. Neural Netw; 1999 Jan; 12(1):91-106. PubMed ID: 12662719
13. Distributed fault tolerance in optimal interpolative nets. Simon D. IEEE Trans Neural Netw; 2001; 12(6):1348-57. PubMed ID: 18249964
14. Query-based learning applied to partially trained multilayer perceptrons. Hwang JN; Choi JJ; Oh S; Marks RJ. IEEE Trans Neural Netw; 1991; 2(1):131-6. PubMed ID: 18276359
15. Higher-order probabilistic perceptrons as Bayesian inference engines. Clark JW; Gernoth KA; Dittmar S; Ristig ML. Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics; 1999 May; 59(5 Pt B):6161-74. PubMed ID: 11969601
16. On the initialization and optimization of multilayer perceptrons. Weymaere N; Martens JP. IEEE Trans Neural Netw; 1994; 5(5):738-51. PubMed ID: 18267848
18. Feature selection for MLP neural network: the use of random permutation of probabilistic outputs. Yang JB; Shen KQ; Ong CJ; Li XP. IEEE Trans Neural Netw; 2009 Dec; 20(12):1911-22. PubMed ID: 19822474
19. Multilayer perceptron, fuzzy sets, and classification. Pal SK; Mitra S. IEEE Trans Neural Netw; 1992; 3(5):683-97. PubMed ID: 18276468
20. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Auer P; Burgsteiner H; Maass W. Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524