These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

114 related articles for article (PubMed ID: 8963469)

  • 1. Can deterministic penalty terms model the effects of synaptic weight noise on network fault-tolerance?
    Edwards PJ; Murray AF
    Int J Neural Syst; 1995 Dec; 6(4):401-16. PubMed ID: 8963469

  • 2. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.
    Ho KI; Leung CS; Sum J
    IEEE Trans Neural Netw; 2010 Jun; 21(6):938-47. PubMed ID: 20388593

  • 3. Analogue synaptic noise--implications and learning improvements.
    Edwards PJ; Murray AF
    Int J Neural Syst; 1993 Dec; 4(4):427-33. PubMed ID: 8049804

  • 4. The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm.
    Su F; Yuan P; Wang Y; Zhang C
    Protein Cell; 2016 Oct; 7(10):735-748. PubMed ID: 27502185

  • 5. Toward optimally distributed computation.
    Edwards PJ; Murray AF
    Neural Comput; 1998 May; 10(4):987-1005. PubMed ID: 9573416

  • 6. Distributed fault tolerance in optimal interpolative nets.
    Simon D
    IEEE Trans Neural Netw; 2001; 12(6):1348-57. PubMed ID: 18249964

  • 7. Weight Noise Injection-Based MLPs With Group Lasso Penalty: Asymptotic Convergence and Application to Node Pruning.
    Wang J; Chang Q; Liu Y; Pal NR
    IEEE Trans Cybern; 2019 Dec; 49(12):4346-4364. PubMed ID: 30530381

  • 8. On the selection of weight decay parameter for faulty networks.
    Leung CS; Wang HJ; Sum J
    IEEE Trans Neural Netw; 2010 Aug; 21(8):1232-44. PubMed ID: 20682468

  • 9. Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training.
    Murray AF; Edwards PJ
    IEEE Trans Neural Netw; 1994; 5(5):792-802. PubMed ID: 18267852

  • 10. Objective functions of online weight noise injection training algorithms for MLPs.
    Ho K; Leung CS; Sum J
    IEEE Trans Neural Netw; 2011 Feb; 22(2):317-23. PubMed ID: 21189237

  • 11. Feedforward sigmoidal networks--equicontinuity and fault-tolerance properties.
    Chandra P; Singh Y
    IEEE Trans Neural Netw; 2004 Nov; 15(6):1350-66. PubMed ID: 15565765

  • 12. A fault-tolerant regularizer for RBF networks.
    Leung CS; Sum JP
    IEEE Trans Neural Netw; 2008 Mar; 19(3):493-507. PubMed ID: 18334367

  • 13. Effects of fast presynaptic noise in attractor neural networks.
    Cortes JM; Torres JJ; Marro J; Garrido PL; Kappen HJ
    Neural Comput; 2006 Mar; 18(3):614-33. PubMed ID: 16483410

  • 14. Unsupervised learning and recall of temporal sequences: an application to robotics.
    Barretto GA; Araújo AF
    Int J Neural Syst; 1999 Jun; 9(3):235-42. PubMed ID: 10560763

  • 15. Fault detection and diagnosis for non-Gaussian stochastic distribution systems with time delays via RBF neural networks.
    Yi Q; Zhan-ming L; Er-chao L
    ISA Trans; 2012 Nov; 51(6):786-91. PubMed ID: 22902083

  • 16. Maximally fault tolerant neural networks.
    Neti C; Schneider MH; Young ED
    IEEE Trans Neural Netw; 1992; 3(1):14-23. PubMed ID: 18276402

  • 17. A penalty-function approach for pruning feedforward neural networks.
    Setiono R
    Neural Comput; 1997 Jan; 9(1):185-204. PubMed ID: 9117898

  • 18. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.
    Sum JP; Leung CS; Ho KI
    IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):211-22. PubMed ID: 24808501

  • 19. Investigating the fault tolerance of neural networks.
    Tchernev EB; Mulvaney RG; Phatak DS
    Neural Comput; 2005 Jul; 17(7):1646-64. PubMed ID: 15901410

  • 20. Training pi-sigma network by online gradient algorithm with penalty for small weight update.
    Xiong Y; Wu W; Kang X; Zhang C
    Neural Comput; 2007 Dec; 19(12):3356-68. PubMed ID: 17970657
