143 related articles for article (PubMed ID: 28358692)
1. Hyperbolic Gradient Operator and Hyperbolic Back-Propagation Learning Algorithms. Nitta T; Kuroe Y. IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1689-1702. PubMed ID: 28358692
2. Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application. Zhang B; Liu Y; Cao J; Wu S; Wang J. Neural Netw; 2019 Jul; 115():50-64. PubMed ID: 30974301
3. Convergence analysis of an augmented algorithm for fully complex-valued neural networks. Xu D; Zhang H; Mandic DP. Neural Netw; 2015 Sep; 69():44-50. PubMed ID: 26057612
4. Fractional-order gradient descent learning of BP neural networks with Caputo derivative. Wang J; Wen Y; Gou Y; Ye Z; Chen H. Neural Netw; 2017 May; 89():19-30. PubMed ID: 28278430
5. Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus. Zhang H; Liu X; Xu D; Zhang Y. Cogn Neurodyn; 2014 Jun; 8(3):261-6. PubMed ID: 24808934
6. A fully complex-valued radial basis function network and its learning algorithm. Savitha R; Suresh S; Sundararajan N. Int J Neural Syst; 2009 Aug; 19(4):253-67. PubMed ID: 19731399
7. Magnified gradient function with deterministic weight modification in adaptive learning. Ng SC; Cheung CC; Leung SH. IEEE Trans Neural Netw; 2004 Nov; 15(6):1411-23. PubMed ID: 15565769
8. A theory of local learning, the learning channel, and the optimality of backpropagation. Baldi P; Sadowski P. Neural Netw; 2016 Nov; 83():51-74. PubMed ID: 27584574
9. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. Xu D; Zhang H; Liu L. Neural Comput; 2010 Oct; 22(10):2655-77. PubMed ID: 20608871
10. Stability analysis of a three-term backpropagation algorithm. Zweiri YH; Seneviratne LD; Althoefer K. Neural Netw; 2005 Dec; 18(10):1341-7. PubMed ID: 16135404
11. An Extension of the Back-Propagation Algorithm to Complex Numbers. Nitta T. Neural Netw; 1997 Nov; 10(8):1391-1415. PubMed ID: 12662482
12. Gradient descent learning algorithm overview: a general dynamical systems perspective. Baldi P. IEEE Trans Neural Netw; 1995; 6(1):182-95. PubMed ID: 18263297
13. Algorithms for accelerated convergence of adaptive PCA. Chatterjee C; Kang Z; Roychowdhury VP. IEEE Trans Neural Netw; 2000; 11(2):338-55. PubMed ID: 18249765
17. H∞-learning of layered neural networks. Nishiyama K; Suzuki K. IEEE Trans Neural Netw; 2001; 12(6):1265-77. PubMed ID: 18249956
18. Convergence analysis of online gradient method for BP neural networks. Wu W; Wang J; Cheng M; Li Z. Neural Netw; 2011 Jan; 24(1):91-8. PubMed ID: 20870390
19. Adaptive complex-valued stepsize based fast learning of complex-valued neural networks. Zhang Y; Huang H. Neural Netw; 2020 Apr; 124():233-242. PubMed ID: 32018161
20. Efficient learning algorithms for three-layer regular feedforward fuzzy neural networks. Liu P; Li H. IEEE Trans Neural Netw; 2004 May; 15(3):545-58. PubMed ID: 15384545