These tools will no longer be maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

120 related articles for article (PubMed ID: 18252631)

  • 1. Relative loss bounds for single neurons.
    Helmbold DP; Kivinen J; Warmuth MK
    IEEE Trans Neural Netw; 1999; 10(6):1291-304. PubMed ID: 18252631

  • 2. Worst-case quadratic loss bounds for prediction using linear functions and gradient descent.
    Cesa-Bianchi N; Long PM; Warmuth MK
    IEEE Trans Neural Netw; 1996; 7(3):604-19. PubMed ID: 18263458

  • 3. A unified approach to universal prediction: generalized upper and lower bounds.
    Vanli ND; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2015 Mar; 26(3):646-51. PubMed ID: 25720015

  • 4. Using random weights to train multilayer networks of hard-limiting units.
Bartlett PL; Downs T
    IEEE Trans Neural Netw; 1992; 3(2):202-10. PubMed ID: 18276421

  • 5. Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks.
    Bartlett PL; Helmbold DP; Long PM
    Neural Comput; 2019 Mar; 31(3):477-502. PubMed ID: 30645179

  • 6. Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm.
    Treadgold NK; Gedeon TD
    IEEE Trans Neural Netw; 1998; 9(4):662-8. PubMed ID: 18252489

  • 7. Online Learning Algorithm Based on Adaptive Control Theory.
    Liu JW; Zhou JJ; Kamel MS; Luo XL
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2278-2293. PubMed ID: 28436895

  • 8. Training a Two-Layer ReLU Network Analytically.
    Barbu A
Sensors (Basel); 2023 Apr; 23(8). PubMed ID: 37112413

  • 9. Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers.
    Paquin AL; Chaib-Draa B; Giguère P
    Neural Netw; 2023 Jul; 164():382-394. PubMed ID: 37167751

  • 10. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks.
    Liu B; Liu Z; Zhang T; Yuan T
    Neural Netw; 2021 Dec; 144():75-89. PubMed ID: 34454244

  • 11. Dynamics in Deep Classifiers Trained with the Square Loss: Normalization, Low Rank, Neural Collapse, and Generalization Bounds.
    Xu M; Rangamani A; Liao Q; Galanti T; Poggio T
    Research (Wash D C); 2023; 6():0024. PubMed ID: 37223467

  • 12. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 13. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 14. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation.
    Kearns M; Ron D
    Neural Comput; 1999 Aug; 11(6):1427-53. PubMed ID: 10423502

  • 15. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
    Song Q; Wu Y; Soh YC
    IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640

  • 16. A local linearized least squares algorithm for training feedforward neural networks.
    Stan O; Kamen E
    IEEE Trans Neural Netw; 2000; 11(2):487-95. PubMed ID: 18249777

  • 17. Temporal Evolution of Generalization during Learning in Linear Networks.
    Baldi P; Chauvin Y
    Neural Comput; 1991; 3(4):589-603. PubMed ID: 31167336

  • 18. Stability-Based Generalization Analysis of Distributed Learning Algorithms for Big Data.
    Wu X; Zhang J; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Mar; 31(3):801-812. PubMed ID: 31071054

  • 19. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation.
    Huang GB; Saratchandran P; Sundararajan N
    IEEE Trans Neural Netw; 2005 Jan; 16(1):57-67. PubMed ID: 15732389

  • 20. Approximate labeling via graph cuts based on linear programming.
    Komodakis N; Tziritas G
    IEEE Trans Pattern Anal Mach Intell; 2007 Aug; 29(8):1436-53. PubMed ID: 17568146
