107 related articles for article with PubMed ID 18255578 (Optimal convergence of on-line backpropagation)

  • 1. Optimal convergence of on-line backpropagation.
    Gori M; Maggini M
    IEEE Trans Neural Netw; 1996; 7(1):251-4. PubMed ID: 18255578

  • 2. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.
    Chen F; Chen GR; He G; Xu X; He Q
    IEEE Trans Neural Netw; 2009 Oct; 20(10):1645-58. PubMed ID: 23460987

  • 3. Rosenblatt's First Theorem and Frugality of Deep Learning.
    Kirdin A; Sidorov S; Zolotykh N
    Entropy (Basel); 2022 Nov; 24(11). PubMed ID: 36359726

  • 4. On adaptive learning rate that guarantees convergence in feedforward networks.
    Behera L; Kumar S; Patnaik A
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1116-25. PubMed ID: 17001974

  • 5. Learning without local minima in radial basis function networks.
    Bianchini M; Frasconi P; Gori M
    IEEE Trans Neural Netw; 1995; 6(3):749-56. PubMed ID: 18263359

  • 6. Backpropagation and ordered derivatives in the time scales calculus.
    Seiffertt J; Wunsch DC
    IEEE Trans Neural Netw; 2010 Aug; 21(8):1262-9. PubMed ID: 20615808

  • 7. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 8. Backpropagation algorithm adaptation parameters using learning automata.
    Beigy H; Meybodi MR
    Int J Neural Syst; 2001 Jun; 11(3):219-28. PubMed ID: 11574959

  • 9. New learning automata based algorithms for adaptation of backpropagation algorithm parameters.
    Meybodi MR; Beigy H
    Int J Neural Syst; 2002 Feb; 12(1):45-67. PubMed ID: 11852444

  • 10. A general backpropagation algorithm for feedforward neural networks learning.
    Yu X; Efe MO; Kaynak O
    IEEE Trans Neural Netw; 2002; 13(1):251-4. PubMed ID: 18244427

  • 11. The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network.
    Rubanov NS
    IEEE Trans Neural Netw; 2000; 11(2):295-305. PubMed ID: 18249761

  • 12. Analysis and test of efficient methods for building recursive deterministic perceptron neural networks.
    Elizondo DA; Birkenhead R; Góngora M; Taillard E; Luyima P
    Neural Netw; 2007 Dec; 20(10):1095-108. PubMed ID: 17904333

  • 13. The fractional correction rule: a new perspective.
    Basu M; Liang Q
    Neural Netw; 1998 Aug; 11(6):1027-39. PubMed ID: 12662772

  • 14. Parameter incremental learning algorithm for neural networks.
    Wan S; Banta LE
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1424-38. PubMed ID: 17131658

  • 15. New nonleast-squares neural network learning algorithms for hypothesis testing.
    Pados DA; Papantoni-Kazakos P
    IEEE Trans Neural Netw; 1995; 6(3):596-609. PubMed ID: 18263346

  • 16. Sensitivity to noise in bidirectional associative memory (BAM).
    Du S; Chen Z; Yuan Z; Zhang X
    IEEE Trans Neural Netw; 2005 Jul; 16(4):887-98. PubMed ID: 16121730

  • 17. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.
    Zhang H; Zhang Y; Xu D; Liu X
    Cogn Neurodyn; 2015 Jun; 9(3):331-40. PubMed ID: 25972981

  • 18. Convergence of stochastic learning in perceptrons with binary synapses.
    Senn W; Fusi S
    Phys Rev E Stat Nonlin Soft Matter Phys; 2005 Jun; 71(6 Pt 1):061907. PubMed ID: 16089765

  • 19. Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks.
    Bello MG
    IEEE Trans Neural Netw; 1992; 3(6):864-75. PubMed ID: 18276484

  • 20. On-line learning algorithms for locally recurrent neural networks.
    Campolucci P; Uncini A; Piazza F; Rao BD
    IEEE Trans Neural Netw; 1999; 10(2):253-71. PubMed ID: 18252525

    Page 1 of 6.