These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

143 related articles for article (PubMed ID: 26958442)

  • 1. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120:113-28. PubMed ID: 26958442

  • 2. Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm.
    Treadgold NK; Gedeon TD
    IEEE Trans Neural Netw; 1998; 9(4):662-8. PubMed ID: 18252489

  • 3. A medical diagnostic tool based on radial basis function classifiers and evolutionary simulated annealing.
    Alexandridis A; Chondrodima E
    J Biomed Inform; 2014 Jun; 49:61-72. PubMed ID: 24662274

  • 4. Hardware prototypes of a Boolean neural network and the simulated annealing optimization method.
    Niittylahti J
    Int J Neural Syst; 1996 Mar; 7(1):45-52. PubMed ID: 8828049

  • 5. Efficient calculation of the Gauss-Newton approximation of the Hessian matrix in neural networks.
    Fairbank M; Alonso E
    Neural Comput; 2012 Mar; 24(3):607-10. PubMed ID: 22168563

  • 6. Leap-frog is a robust algorithm for training neural networks.
    Holm JE; Botha EC
    Network; 1999 Feb; 10(1):1-13. PubMed ID: 10372759

  • 7. BP Neural Network Based on Simulated Annealing Algorithm Optimization for Financial Crisis Dynamic Early Warning Model.
    Chen Y
    Comput Intell Neurosci; 2021; 2021:4034903. PubMed ID: 34659390

  • 8. Fast and efficient second-order method for training radial basis function networks.
    Xie T; Yu H; Hewlett J; Rózycki P; Wilamowski B
    IEEE Trans Neural Netw Learn Syst; 2012 Apr; 23(4):609-19. PubMed ID: 24805044

  • 9. A machine learning method for generation of a neural network architecture: a continuous ID3 algorithm.
    Cios KJ; Liu N
    IEEE Trans Neural Netw; 1992; 3(2):280-91. PubMed ID: 18276429

  • 10. A parallel genetic/neural network learning algorithm for MIMD shared memory machines.
    Hung SL; Adeli H
    IEEE Trans Neural Netw; 1994; 5(6):900-9. PubMed ID: 18267864

  • 11. Training feedforward networks with the Marquardt algorithm.
    Hagan MT; Menhaj MB
    IEEE Trans Neural Netw; 1994; 5(6):989-93. PubMed ID: 18267874

  • 12. Three Methods to Speed up the Training of Feedforward and Feedback Perceptrons.
    Agarwal M; Stäger F
    Neural Netw; 1997 Nov; 10(8):1435-1443. PubMed ID: 12662484

  • 13. Neural networks convergence using physicochemical data.
    Karelson M; Dobchev DA; Kulshyn OV; Katritzky AR
    J Chem Inf Model; 2006; 46(5):1891-7. PubMed ID: 16995718

  • 14. Paralleled hardware annealing for optimal solutions on electronic neural networks.
    Lee BW; Sheu BJ
    IEEE Trans Neural Netw; 1993; 4(4):588-99. PubMed ID: 18267760

  • 15. Efficient Training of Recurrent Neural Network with Time Delays.
    Marom E; Saad D; Cohen B
    Neural Netw; 1997 Jan; 10(1):51-59. PubMed ID: 12662886

  • 16. Dynamic tunneling technique for efficient training of multilayer perceptrons.
    RoyChowdhury P; Singh YP; Chansarkar RA
    IEEE Trans Neural Netw; 1999; 10(1):48-55. PubMed ID: 18252502

  • 17. Neural Network Structure Optimization by Simulated Annealing.
    Kuo CL; Kuruoglu EE; Chan WKV
    Entropy (Basel); 2022 Feb; 24(3). PubMed ID: 35327859

  • 18. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.
    Wu W; Fan Q; Zurada JM; Wang J; Yang D; Liu Y
    Neural Netw; 2014 Feb; 50:72-8. PubMed ID: 24291693

  • 19. The annealing robust backpropagation (ARBP) learning algorithm.
    Chuang CC; Su SF; Hsiao CC
    IEEE Trans Neural Netw; 2000; 11(5):1067-77. PubMed ID: 18249835

  • 20. Robust full Bayesian learning for radial basis networks.
    Andrieu C; de Freitas N; Doucet A
    Neural Comput; 2001 Oct; 13(10):2359-407. PubMed ID: 11571002
