

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

176 related articles for the article with PubMed ID 18252489:

  • 1. Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm.
    Treadgold NK; Gedeon TD
    IEEE Trans Neural Netw; 1998; 9(4):662-8. PubMed ID: 18252489

  • 2. A medical diagnostic tool based on radial basis function classifiers and evolutionary simulated annealing.
    Alexandridis A; Chondrodima E
    J Biomed Inform; 2014 Jun; 49():61-72. PubMed ID: 24662274

  • 3. The annealing robust backpropagation (ARBP) learning algorithm.
    Chuang CC; Su SF; Hsiao CC
    IEEE Trans Neural Netw; 2000; 11(5):1067-77. PubMed ID: 18249835

  • 4. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120():113-28. PubMed ID: 26958442

  • 5. Adaptive hybrid learning for neural networks.
    Smithies R; Salhi S; Queen N
    Neural Comput; 2004 Jan; 16(1):139-57. PubMed ID: 15006027

  • 6. Backpropagation algorithm adaptation parameters using learning automata.
    Beigy H; Meybodi MR
    Int J Neural Syst; 2001 Jun; 11(3):219-28. PubMed ID: 11574959

  • 7. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 8. New learning automata based algorithms for adaptation of backpropagation algorithm parameters.
    Meybodi MR; Beigy H
    Int J Neural Syst; 2002 Feb; 12(1):45-67. PubMed ID: 11852444

  • 9. Hardware prototypes of a Boolean neural network and the simulated annealing optimization method.
    Niittylahti J
    Int J Neural Syst; 1996 Mar; 7(1):45-52. PubMed ID: 8828049

  • 10. On the problem of local minima in recurrent neural networks.
    Bianchini M; Gori M; Maggini M
    IEEE Trans Neural Netw; 1994; 5(2):167-77. PubMed ID: 18267788

  • 11. An efficient constrained training algorithm for feedforward networks.
    Karras DA; Perantonis SJ
    IEEE Trans Neural Netw; 1995; 6(6):1420-34. PubMed ID: 18263435

  • 12. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.
    Sheng Z; Wang J; Zhou S; Zhou B
    Chaos; 2014 Mar; 24(1):013133. PubMed ID: 24697395

  • 13. Simulated annealing applied to IMRT beam angle optimization: A computational study.
    Dias J; Rocha H; Ferreira B; Lopes Mdo C
    Phys Med; 2015 Nov; 31(7):747-56. PubMed ID: 25843890

  • 14. A local linearized least squares algorithm for training feedforward neural networks.
    Stan O; Kamen E
    IEEE Trans Neural Netw; 2000; 11(2):487-95. PubMed ID: 18249777

  • 15. An Investigation into the Improvement of Local Minima of the Hopfield Network.
    Armitage AF; Gupta NK; Peng M
    Neural Netw; 1996 Oct; 9(7):1241-1253. PubMed ID: 12662596

  • 16. Extended least squares based algorithm for training feedforward networks.
    Yam JF; Chow TS
    IEEE Trans Neural Netw; 1997; 8(3):806-10. PubMed ID: 18255683

  • 17. An annealed chaotic maximum neural network for bipartite subgraph problem.
    Wang J; Tang Z; Wang R
    Int J Neural Syst; 2004 Apr; 14(2):107-16. PubMed ID: 15112368

  • 18. Improving the convergence of the backpropagation algorithm using learning rate adaptation methods.
    Magoulas GD; Vrahatis MN; Androulakis GS
    Neural Comput; 1999 Oct; 11(7):1769-96. PubMed ID: 10490946

  • 19. A machine learning method for generation of a neural network architecture: a continuous ID3 algorithm.
    Cios KJ; Liu N
    IEEE Trans Neural Netw; 1992; 3(2):280-91. PubMed ID: 18276429

  • 20. Deterministic global optimization for FNN training.
    Toh KA
    IEEE Trans Syst Man Cybern B Cybern; 2003; 33(6):977-83. PubMed ID: 18238248
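The papers above share a theme: gradient-based neural-network training combined with global-search heuristics such as simulated annealing and regularizers such as weight decay. The sketch below is a minimal, generic illustration of that combination, prompted by the title of entry 1 (PubMed ID 18252489). It is not the published SARPROP algorithm or any other method listed here; the toy network, data, and every hyperparameter are assumptions chosen only for illustration.

```python
# Generic illustrative sketch only -- NOT the published SARPROP algorithm.
# It combines plain gradient descent with (a) weight decay added to the
# gradient and (b) simulated-annealing-style noise whose "temperature"
# shrinks over training. All names and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) sampled with noise.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One-hidden-layer network with tanh activations.
n_hidden = 16
W1 = rng.standard_normal((1, n_hidden)) * 0.5
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05      # learning rate (assumed)
decay = 1e-4   # weight-decay coefficient (assumed)
T0 = 0.5       # initial annealing temperature (assumed)

for epoch in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y

    # Backward pass for mean squared error, with weight decay on the gradients.
    dW2 = H.T @ err / len(X) + decay * W2
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X) + decay * W1
    db1 = dH.mean(axis=0)

    # Simulated-annealing-style perturbation: Gaussian noise whose scale
    # (the temperature) decays geometrically with the epoch count.
    T = T0 * 0.997 ** epoch
    W1 -= lr * dW1 + T * lr * rng.standard_normal(W1.shape)
    b1 -= lr * db1
    W2 -= lr * dW2 + T * lr * rng.standard_normal(W2.shape)
    b2 -= lr * db2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The geometric temperature schedule makes the random perturbations large early on, which can help the weights escape poor local minima, and negligible later, while the weight-decay term keeps the weights small throughout training.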
