These tools will no longer be maintained as of December 31, 2024.
117 related articles for article (PubMed ID: 18252631)
1. Relative loss bounds for single neurons. Helmbold DP; Kivinen J; Warmuth MK. IEEE Trans Neural Netw; 1999; 10(6):1291-304. PubMed ID: 18252631.

2. Worst-case quadratic loss bounds for prediction using linear functions and gradient descent. Cesa-Bianchi N; Long PM; Warmuth MK. IEEE Trans Neural Netw; 1996; 7(3):604-19. PubMed ID: 18263458.

3. A unified approach to universal prediction: generalized upper and lower bounds. Vanli ND; Kozat SS. IEEE Trans Neural Netw Learn Syst; 2015 Mar; 26(3):646-51. PubMed ID: 25720015.

4. Using random weights to train multilayer networks of hard-limiting units. Barlett PL; Downs T. IEEE Trans Neural Netw; 1992; 3(2):202-10. PubMed ID: 18276421.

5. Gradient descent with identity initialization efficiently learns positive-definite linear transformations by deep residual networks. Bartlett PL; Helmbold DP; Long PM. Neural Comput; 2019 Mar; 31(3):477-502. PubMed ID: 30645179.

6. Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm. Treadgold NK; Gedeon TD. IEEE Trans Neural Netw; 1998; 9(4):662-8. PubMed ID: 18252489.

7. Online learning algorithm based on adaptive control theory. Liu JW; Zhou JJ; Kamel MS; Luo XL. IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2278-2293. PubMed ID: 28436895.

8. Training a two-layer ReLU network analytically. Barbu A. Sensors (Basel); 2023 Apr; 23(8). PubMed ID: 37112413.

9. Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers. Paquin AL; Chaib-Draa B; Giguère P. Neural Netw; 2023 Jul; 164:382-394. PubMed ID: 37167751.

10. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks. Liu B; Liu Z; Zhang T; Yuan T. Neural Netw; 2021 Dec; 144:75-89. PubMed ID: 34454244.

11. Dynamics in deep classifiers trained with the square loss: normalization, low rank, neural collapse, and generalization bounds. Xu M; Rangamani A; Liao Q; Galanti T; Poggio T. Research (Wash D C); 2023; 6:0024. PubMed ID: 37223467.

12. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Auer P; Burgsteiner H; Maass W. Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524.

13. Novel maximum-margin training algorithms for supervised neural networks. Ludwig O; Nunes U. IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990.

14. A topological description of loss surfaces based on Betti numbers. Bucarelli MS; D'Inverno GA; Bianchini M; Scarselli F; Silvestri F. Neural Netw; 2024 Oct; 178:106465. PubMed ID: 38943863.

15. Magnitude and angle dynamics in training single ReLU neurons. Lee S; Sim B; Ye JC. Neural Netw; 2024 Oct; 178:106435. PubMed ID: 38970945.

16. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Kearns M; Ron D. Neural Comput; 1999 Aug; 11(6):1427-53. PubMed ID: 10423502.

17. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain. Song Q; Wu Y; Soh YC. IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640.

18. A local linearized least squares algorithm for training feedforward neural networks. Stan O; Kamen E. IEEE Trans Neural Netw; 2000; 11(2):487-95. PubMed ID: 18249777.

19. Hebbian descent: a unified view on log-likelihood learning. Melchior J; Schiewer R; Wiskott L. Neural Comput; 2024 Aug; 36(9):1669-1712. PubMed ID: 39163553.

20. Temporal evolution of generalization during learning in linear networks. Baldi P; Chauvin Y. Neural Comput; 1991; 3(4):589-603. PubMed ID: 31167336.