These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

133 related articles for article (PubMed ID: 18267864)

  • 1. A parallel genetic/neural network learning algorithm for MIMD shared memory machines.
    Hung SL; Adeli H
    IEEE Trans Neural Netw; 1994; 5(6):900-9. PubMed ID: 18267864

  • 2. Performance of the Alex AVX-2 MIMD architecture in learning the NetTalk database.
    Abbas HM
    IEEE Trans Neural Netw; 2004 Mar; 15(2):505-14. PubMed ID: 15384542

  • 3. Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm.
    Tsai JT; Chou JH; Liu TK
    IEEE Trans Neural Netw; 2006 Jan; 17(1):69-80. PubMed ID: 16526477

  • 4. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.
    Lin WQ; Jiang JH; Zhou YP; Wu HL; Shen GL; Yu RQ
    J Comput Chem; 2007 Jan; 28(2):519-27. PubMed ID: 17186488

  • 5. Parameter incremental learning algorithm for neural networks.
    Wan S; Banta LE
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1424-38. PubMed ID: 17131658

  • 6. On adaptive learning rate that guarantees convergence in feedforward networks.
    Behera L; Kumar S; Patnaik A
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1116-25. PubMed ID: 17001974

  • 7. A hybrid linear/nonlinear training algorithm for feedforward neural networks.
    McLoone S; Brown MD; Irwin G; Lightbody A
    IEEE Trans Neural Netw; 1998; 9(4):669-84. PubMed ID: 18252490

  • 8. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120():113-28. PubMed ID: 26958442

  • 9. A linear recurrent kernel online learning algorithm with sparse updates.
    Fan H; Song Q
    Neural Netw; 2014 Feb; 50():142-53. PubMed ID: 24300551

  • 10. On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks.
    Luo ZQ
    Neural Comput; 1991; 3(2):226-245. PubMed ID: 31167300

  • 11. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 12. Performance analysis of a pipelined backpropagation parallel algorithm.
    Petrowski A; Dreyfus G; Girault C
    IEEE Trans Neural Netw; 1993; 4(6):970-81. PubMed ID: 18276527

  • 13. A selective learning method to improve the generalization of multilayer feedforward neural networks.
    Galván IM; Isasi P; Aler R; Valls JM
    Int J Neural Syst; 2001 Apr; 11(2):167-77. PubMed ID: 14632169

  • 14. Training two-layered feedforward networks with variable projection method.
    Kim CT; Lee JJ
    IEEE Trans Neural Netw; 2008 Feb; 19(2):371-5. PubMed ID: 18269969

  • 15. The massively parallel genetic algorithm for RNA folding: MIMD implementation and population variation.
    Shapiro BA; Wu JC; Bengali D; Potts MJ
    Bioinformatics; 2001 Feb; 17(2):137-48. PubMed ID: 11238069

  • 16. Advanced neural-network training algorithm with reduced complexity based on Jacobian deficiency.
    Zhou G; Si J
    IEEE Trans Neural Netw; 1998; 9(3):448-53. PubMed ID: 18252468

  • 17. Training feedforward networks with the Marquardt algorithm.
    Hagan MT; Menhaj MB
    IEEE Trans Neural Netw; 1994; 5(6):989-93. PubMed ID: 18267874

  • 18. Hierarchical genetic algorithm for near optimal feedforward neural network design.
    Yen G; Lu H
    Int J Neural Syst; 2002 Feb; 12(1):31-43. PubMed ID: 11852443

  • 19. A formal selection and pruning algorithm for feedforward artificial neural network optimization.
    Ponnapalli PS; Ho KC; Thomson M
    IEEE Trans Neural Netw; 1999; 10(4):964-8. PubMed ID: 18252597

  • 20. Efficient learning algorithms for three-layer regular feedforward fuzzy neural networks.
    Liu P; Li H
    IEEE Trans Neural Netw; 2004 May; 15(3):545-58. PubMed ID: 15384545
