These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine *

93 related articles for article (PubMed ID: 18282868)

  • 41. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer.
    Ergezinger S; Thomsen E
    IEEE Trans Neural Netw; 1995; 6(1):31-42. PubMed ID: 18263283

  • 42. Stable dynamic backpropagation learning in recurrent neural networks.
    Jin L; Gupta MM
    IEEE Trans Neural Netw; 1999; 10(6):1321-34. PubMed ID: 18252634

  • 43. Using random weights to train multilayer networks of hard-limiting units.
    Bartlett PL; Downs T
    IEEE Trans Neural Netw; 1992; 3(2):202-10. PubMed ID: 18276421

  • 44. FPGA implementation of a pyramidal Weightless Neural Networks learning system.
    Al-Alawi R
    Int J Neural Syst; 2003 Aug; 13(4):225-37. PubMed ID: 12964210

  • 45. TAO-robust backpropagation learning algorithm.
    Pernía-Espinoza AV; Ordieres-Meré JB; Martínez-de-Pisón FJ; González-Marcos A
    Neural Netw; 2005 Mar; 18(2):191-204. PubMed ID: 15795116

  • 46. Relative loss bounds for single neurons.
    Helmbold DP; Kivinen J; Warmuth MK
    IEEE Trans Neural Netw; 1999; 10(6):1291-304. PubMed ID: 18252631

  • 47. Parameter incremental learning algorithm for neural networks.
    Wan S; Banta LE
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1424-38. PubMed ID: 17131658

  • 48. A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks.
    Man Z; Wu HR; Liu S; Yu X
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1580-91. PubMed ID: 17131670

  • 49. The geometrical learning of binary neural networks.
    Kim JH; Park SK
    IEEE Trans Neural Netw; 1995; 6(1):237-47. PubMed ID: 18263303

  • 50. Intelligent optimal control with dynamic neural networks.
    Becerikli Y; Konar AF; Samad T
    Neural Netw; 2003 Mar; 16(2):251-9. PubMed ID: 12628610

  • 51. ANASA-a stochastic reinforcement algorithm for real-valued neural computation.
    Vasilakos AV; Loukas NH
    IEEE Trans Neural Netw; 1996; 7(4):830-42. PubMed ID: 18263479

  • 52. An Extension of the Back-Propagation Algorithm to Complex Numbers.
    Nitta T
    Neural Netw; 1997 Nov; 10(8):1391-1415. PubMed ID: 12662482

  • 53. H∞-learning of layered neural networks.
    Nishiyama K; Suzuki K
    IEEE Trans Neural Netw; 2001; 12(6):1265-77. PubMed ID: 18249956

  • 54. On adaptive learning rate that guarantees convergence in feedforward networks.
    Behera L; Kumar S; Patnaik A
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1116-25. PubMed ID: 17001974

  • 55. Extended least squares based algorithm for training feedforward networks.
    Yam JF; Chow TS
    IEEE Trans Neural Netw; 1997; 8(3):806-10. PubMed ID: 18255683

  • 56. Training feedforward networks with the Marquardt algorithm.
    Hagan MT; Menhaj MB
    IEEE Trans Neural Netw; 1994; 5(6):989-93. PubMed ID: 18267874

  • 57. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
    Huynh HT; Won Y; Kim JJ
    Int J Neural Syst; 2008 Oct; 18(5):433-41. PubMed ID: 18991365

  • 58. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.
    Jankovic M; Ogawa H
    Int J Neural Syst; 2004 Oct; 14(5):313-23. PubMed ID: 15593379

  • 59. The Dropout Learning Algorithm.
    Baldi P; Sadowski P
    Artif Intell; 2014 May; 210():78-122. PubMed ID: 24771879

  • 60. Training a single sigmoidal neuron is hard.
    Síma J
    Neural Comput; 2002 Nov; 14(11):2709-28. PubMed ID: 12433296
