23. Training artificial neural networks directly on the concordance index for censored data using genetic algorithms. Kalderstam J; Edén P; Bendahl PO; Strand C; Fernö M; Ohlsson M. Artif Intell Med; 2013 Jun; 58(2):125-32. PubMed ID: 23582884
24. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation. Vuković N; Miljković Z. Neural Netw; 2013 Oct; 46:210-26. PubMed ID: 23811384
25. A novel type of activation function in artificial neural networks: Trained activation function. Ertuğrul ÖF. Neural Netw; 2018 Mar; 99:148-157. PubMed ID: 29427841
26. New learning automata based algorithms for adaptation of backpropagation algorithm parameters. Meybodi MR; Beigy H. Int J Neural Syst; 2002 Feb; 12(1):45-67. PubMed ID: 11852444
28. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks. Huynh HT; Won Y; Kim JJ. Int J Neural Syst; 2008 Oct; 18(5):433-41. PubMed ID: 18991365
29. An SOM-based algorithm for optimization with dynamic weight updating. Chen YY; Young KY. Int J Neural Syst; 2007 Jun; 17(3):171-81. PubMed ID: 17640098
30. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks. Gong M; Liu J; Li H; Cai Q; Su L. IEEE Trans Neural Netw Learn Syst; 2015 Dec; 26(12):3263-77. PubMed ID: 26340790
31. Ordinal neural networks without iterative tuning. Fernández-Navarro F; Riccardi A; Carloni S. IEEE Trans Neural Netw Learn Syst; 2014 Nov; 25(11):2075-85. PubMed ID: 25330430
33. Learning beyond finite memory in recurrent networks of spiking neurons. Tino P; Mills AJ. Neural Comput; 2006 Mar; 18(3):591-613. PubMed ID: 16483409
34. A neural network algorithm for semi-supervised node label learning from unbalanced data. Frasca M; Bertoni A; Re M; Valentini G. Neural Netw; 2013 Jul; 43:84-98. PubMed ID: 23500503
35. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks. Alvarellos-González A; Pazos A; Porto-Pazos AB. Comput Math Methods Med; 2012; 2012:476324. PubMed ID: 22649480
36. On efficient learning machine with root-power mean neuron in complex domain. Tripathi BK; Kalra PK. IEEE Trans Neural Netw; 2011 May; 22(5):727-38. PubMed ID: 21447449
37. Rough sets and genetic algorithms in learning cellular neural networks cloning template for decision making system. Radwan E; Tazaki E. Int J Neural Syst; 2004 Feb; 14(1):57-68. PubMed ID: 15034947
38. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm. Jankovic M; Ogawa H. Int J Neural Syst; 2004 Oct; 14(5):313-23. PubMed ID: 15593379
39. Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Ponulak F; Kasiński A. Neural Comput; 2010 Feb; 22(2):467-510. PubMed ID: 19842989
40. Application of Meta-Heuristic Algorithms for Training Neural Networks and Deep Learning Architectures: A Comprehensive Review. Kaveh M; Mesgari MS. Neural Process Lett; 2022 Oct; 1-104. PubMed ID: 36339645