These tools will no longer be maintained as of December 31, 2024.



269 related articles for article (PubMed ID: 11771721)

  • 1. Upper bound of the expected training error of neural network regression for a Gaussian noise sequence.
    Hagiwara K; Hayasaka T; Toda N; Usui S; Kuno K
    Neural Netw; 2001 Dec; 14(10):1419-29. PubMed ID: 11771721

  • 2. On the problem in model selection of neural network regression in overrealizable scenario.
    Hagiwara K
    Neural Comput; 2002 Aug; 14(8):1979-2002. PubMed ID: 12180410

  • 3. Relation between weight size and degree of over-fitting in neural network regression.
    Hagiwara K; Fukumizu K
    Neural Netw; 2008 Jan; 21(1):48-58. PubMed ID: 18206348

  • 4. Noise-enhanced convolutional neural networks.
    Audhkhasi K; Osoba O; Kosko B
    Neural Netw; 2016 Jun; 78():15-23. PubMed ID: 26700535

  • 5. Localized generalization error model and its application to architecture selection for radial basis function neural network.
    Yeung DS; Ng WW; Wang D; Tsang EC; Wang XZ
    IEEE Trans Neural Netw; 2007 Sep; 18(5):1294-305. PubMed ID: 18220181

  • 6. Automatic basis selection techniques for RBF networks.
    Ghodsi A; Schuurmans D
    Neural Netw; 2003; 16(5-6):809-16. PubMed ID: 12850038

  • 7. Empirical error-confidence curves for neural network and Gaussian classifiers.
    Wolff GJ; Stork DG; Owen A
    Int J Neural Syst; 1996 Jul; 7(3):263-71. PubMed ID: 8891842

  • 8. Analysis on the inherent noise tolerance of feedforward network and one noise-resilient structure.
    Lu W; Zhang Z; Qin F; Zhang W; Lu Y; Liu Y; Zheng Y
    Neural Netw; 2023 Aug; 165():786-798. PubMed ID: 37418861

  • 9. Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness.
    Jin P; Lu L; Tang Y; Karniadakis GE
    Neural Netw; 2020 Oct; 130():85-99. PubMed ID: 32650153

  • 10. Learning efficiency of redundant neural networks in Bayesian estimation.
    Watanabe S
    IEEE Trans Neural Netw; 2001; 12(6):1475-86. PubMed ID: 18249976

  • 11. Learning and generalization in radial basis function networks.
    Freeman JA; Saad D
    Neural Comput; 1995 Sep; 7(5):1000-20. PubMed ID: 7584888

  • 12. Analysis of augmented-input-layer RBFNN.
    Uykan Z; Koivo HN
    IEEE Trans Neural Netw; 2005 Mar; 16(2):364-9. PubMed ID: 15787143

  • 13. Trading variance reduction with unbiasedness: the regularized subspace information criterion for robust model selection in kernel regression.
    Sugiyama M; Kawanabe M; Müller KR
    Neural Comput; 2004 May; 16(5):1077-104. PubMed ID: 15070511

  • 14. Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.
    Zhang J; Liu T; Tao D
    IEEE Trans Neural Netw Learn Syst; 2024 Nov; 35(11):16683-16695. PubMed ID: 37585328

  • 15. Noise, regularizers, and unrealizable scenarios in online learning from restricted training sets.
    Xiong YS; Saad D
    Phys Rev E Stat Nonlin Soft Matter Phys; 2001 Jul; 64(1 Pt 1):011919. PubMed ID: 11461300

  • 16. Network information criterion-determining the number of hidden units for an artificial neural network model.
    Murata N; Yoshizawa S; Amari S
    IEEE Trans Neural Netw; 1994; 5(6):865-72. PubMed ID: 18267861

  • 17. Artificial neural networks (ANNs) and partial least squares (PLS) regression in the quantitative analysis of cocrystal formulations by Raman and ATR-FTIR spectroscopy.
    Barmpalexis P; Karagianni A; Nikolakakis I; Kachrimanis K
    J Pharm Biomed Anal; 2018 Sep; 158():214-224. PubMed ID: 29886369

  • 18. Impulse Data Models for the Inverse Problem of Electrocardiography.
    Peng T; Malik A; Bear LR; Trew ML
    IEEE J Biomed Health Inform; 2022 Mar; 26(3):1353-1361. PubMed ID: 34428164

  • 19. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 20. Parameter convergence and learning curves for neural networks.
    Fine TL; Mukherjee S
    Neural Comput; 1999 Apr; 11(3):747-70. PubMed ID: 10085428

Page 1 of 14.