

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

94 related articles for article (PubMed ID: 18267852)

  • 1. Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training.
    Murray AF; Edwards PJ
    IEEE Trans Neural Netw; 1994; 5(5):792-802. PubMed ID: 18267852

  • 2. Analogue synaptic noise--implications and learning improvements.
    Edwards PJ; Murray AF
    Int J Neural Syst; 1993 Dec; 4(4):427-33. PubMed ID: 8049804

  • 3. Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvements.
    Murray AF; Edwards PJ
    IEEE Trans Neural Netw; 1993; 4(4):722-5. PubMed ID: 18267774

  • 4. Objective functions of online weight noise injection training algorithms for MLPs.
    Ho K; Leung CS; Sum J
    IEEE Trans Neural Netw; 2011 Feb; 22(2):317-23. PubMed ID: 21189237

  • 5. Can deterministic penalty terms model the effects of synaptic weight noise on network fault-tolerance?
    Edwards PJ; Murray AF
    Int J Neural Syst; 1995 Dec; 6(4):401-16. PubMed ID: 8963469

  • 6. A quantitative study of fault tolerance, noise immunity, and generalization ability of MLPs.
    Bernier JL; Ortega J; Ros E; Rojas I; Prieto A
    Neural Comput; 2000 Dec; 12(12):2941-64. PubMed ID: 11112261

  • 7. A probabilistic model for the fault tolerance of multilayer perceptrons.
    Merchawi NS; Kumara ST; Das CR
    IEEE Trans Neural Netw; 1996; 7(1):201-5. PubMed ID: 18255571

  • 8. Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm.
    Sum J; Leung CS
    IEEE Trans Neural Netw Learn Syst; 2023 May; 34(5):2619-2632. PubMed ID: 34487503

  • 9. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.
    Ho KI; Leung CS; Sum J
    IEEE Trans Neural Netw; 2010 Jun; 21(6):938-47. PubMed ID: 20388593

  • 10. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.
    Sum JP; Leung CS; Ho KI
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):211-22. PubMed ID: 24808501

  • 11. Distributed fault tolerance in optimal interpolative nets.
    Simon D
    IEEE Trans Neural Netw; 2001; 12(6):1348-57. PubMed ID: 18249964

  • 12. A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks.
    Cavalieri S; Mirabella O
    Neural Netw; 1999 Jan; 12(1):91-106. PubMed ID: 12662719

  • 13. A new measurement of noise immunity and generalization ability for MLPs.
    Bernier JL; Ortega J; Ros E; Rojas I; Prieto A
    Int J Neural Syst; 1999 Dec; 9(6):511-21. PubMed ID: 10651334

  • 14. Data storage channel equalization using neural networks.
    Nair SK; Moon J
    IEEE Trans Neural Netw; 1997; 8(5):1037-48. PubMed ID: 18255707

  • 15. An analysis of noise in recurrent neural networks: convergence and generalization.
    Jim KC; Giles CL; Horne BG
    IEEE Trans Neural Netw; 1996; 7(6):1424-38. PubMed ID: 18263536

  • 16. A programmable analog VLSI neural network processor for communication receivers.
    Choi J; Bang SH; Sheu BJ
    IEEE Trans Neural Netw; 1993; 4(3):484-95. PubMed ID: 18267752

  • 17. DMP3: a dynamic multilayer perceptron construction algorithm.
    Andersen TL; Martinez TR
    Int J Neural Syst; 2001 Apr; 11(2):145-65. PubMed ID: 14632168

  • 18. Two algorithms for neural-network design and training with application to channel equalization.
    Sweatman CZ; Mulgrew B; Gibson GJ
    IEEE Trans Neural Netw; 1998; 9(3):533-43. PubMed ID: 18252477

  • 19. Determining and improving the fault tolerance of multilayer perceptrons in a pattern-recognition application.
    Emmerson MD; Damper RI
    IEEE Trans Neural Netw; 1993; 4(5):788-93. PubMed ID: 18276508

  • 20. Hybrid Training Method for MLP: Optimization of Architecture and Training.
    Zanchettin C; Ludermir TB; Almeida LM
    IEEE Trans Syst Man Cybern B Cybern; 2011 Aug; 41(4):1097-109. PubMed ID: 21317085
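Several of the articles above (notably refs. 1-5 and 8-10) study injecting synaptic weight noise during multilayer perceptron training as a route to fault tolerance and regularization. The sketch below illustrates the general scheme only: additive Gaussian noise is drawn for the weights on each forward pass, gradients are taken through the noisy weights, and updates are applied to the clean weights. The network size, learning rate, and noise level are illustrative assumptions, not parameters taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-8-1 MLP with sigmoid activations (sizes are assumptions)
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, sigma = 0.5, 0.05  # learning rate and weight-noise std (assumed values)

for epoch in range(5000):
    # draw fresh additive Gaussian noise for the weights each pass
    n1 = rng.normal(0, sigma, W1.shape)
    n2 = rng.normal(0, sigma, W2.shape)
    h = sigmoid(X @ (W1 + n1) + b1)
    out = sigmoid(h @ (W2 + n2) + b2)

    # backprop through the NOISY weights (squared-error loss),
    # but apply the updates to the clean weights
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ (W2 + n2).T * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

# evaluate with clean (noise-free) weights
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse = float(np.mean((pred - y) ** 2))
print(np.round(pred.ravel(), 2), mse)
```

Because the weights seen at evaluation differ slightly from those used during each training step, the network is pushed toward solutions that are insensitive to small weight perturbations, which is the fault-tolerance effect these papers analyze.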
