

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

113 related articles for the article (PubMed ID: 12662633)

  • 1. Accelerating neural network training using weight extrapolations.
    Kamarthi SV; Pittner S
    Neural Netw; 1999 Nov; 12(9):1285-1299. PubMed ID: 12662633

  • 2. New learning automata based algorithms for adaptation of backpropagation algorithm parameters.
    Meybodi MR; Beigy H
    Int J Neural Syst; 2002 Feb; 12(1):45-67. PubMed ID: 11852444

  • 3. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms.
    Yang C; Kim H; Adhikari SP; Chua LO
    Sensors (Basel); 2016 Dec; 17(1):. PubMed ID: 28025566

  • 4. Parameter incremental learning algorithm for neural networks.
    Wan S; Banta LE
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1424-38. PubMed ID: 17131658

  • 5. A New Correntropy-Based Conjugate Gradient Backpropagation Algorithm for Improving Training in Neural Networks.
    Heravi AR; Abed Hodtani G
    IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):6252-6263. PubMed ID: 29993752

  • 6. On the weight convergence of Elman networks.
    Song Q
    IEEE Trans Neural Netw; 2010 Mar; 21(3):463-80. PubMed ID: 20129857

  • 7. Recent advances in the MOBJ algorithm for training artificial neural networks.
    Teixeira RD; Braga AP; Takahashi RH; Saldanha RR
    Int J Neural Syst; 2001 Jun; 11(3):265-70. PubMed ID: 11574964

  • 8. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 9. Stability analysis of a three-term backpropagation algorithm.
    Zweiri YH; Seneviratne LD; Althoefer K
    Neural Netw; 2005 Dec; 18(10):1341-7. PubMed ID: 16135404

  • 10. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.
    Lin WQ; Jiang JH; Zhou YP; Wu HL; Shen GL; Yu RQ
    J Comput Chem; 2007 Jan; 28(2):519-27. PubMed ID: 17186488

  • 11. Efficient mapping of backpropagation algorithm onto a network of workstations.
    Sudhakar V; Siva Ram Murthy C
    IEEE Trans Syst Man Cybern B Cybern; 1998; 28(6):841-8. PubMed ID: 18256002

  • 12. A robust backpropagation learning algorithm for function approximation.
    Chen DS; Jain RC
    IEEE Trans Neural Netw; 1994; 5(3):467-79. PubMed ID: 18267813

  • 13. A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm.
    Abid S; Fnaiech F; Najim M
    IEEE Trans Neural Netw; 2001; 12(2):424-30. PubMed ID: 18244397

  • 14. Backpropagation algorithm adaptation parameters using learning automata.
    Beigy H; Meybodi MR
    Int J Neural Syst; 2001 Jun; 11(3):219-28. PubMed ID: 11574959

  • 15. Leap-frog is a robust algorithm for training neural networks.
    Holm JE; Botha EC
    Network; 1999 Feb; 10(1):1-13. PubMed ID: 10372759

  • 16. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120():113-28. PubMed ID: 26958442

  • 17. Using random weights to train multilayer networks of hard-limiting units.
    Bartlett PL; Downs T
    IEEE Trans Neural Netw; 1992; 3(2):202-10. PubMed ID: 18276421

  • 18. On-line learning algorithms for locally recurrent neural networks.
    Campolucci P; Uncini A; Piazza F; Rao BD
    IEEE Trans Neural Netw; 1999; 10(2):253-71. PubMed ID: 18252525

  • 19. Training feedforward networks with the Marquardt algorithm.
    Hagan MT; Menhaj MB
    IEEE Trans Neural Netw; 1994; 5(6):989-93. PubMed ID: 18267874

  • 20. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.
    Cao J; Cui H; Shi H; Jiao L
    PLoS One; 2016; 11(6):e0157551. PubMed ID: 27304987
