These tools are no longer maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

161 related articles for article (PubMed ID: 19888596)

  • 1. Learning with incomplete information in the committee machine.
    Bergmann UM; Kühn R; Stamatescu IO
    Biol Cybern; 2009 Dec; 101(5-6):401-10. PubMed ID: 19888596

  • 2. Learning with incomplete information and the mathematical structure behind it.
    Kühn R; Stamatescu IO
    Biol Cybern; 2007 Jul; 97(1):99-112. PubMed ID: 17534648

  • 3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 4. Subspace information criterion for model selection.
    Sugiyama M; Ogawa H
    Neural Comput; 2001 Aug; 13(8):1863-89. PubMed ID: 11506674

  • 5. Nonlinear complex-valued extensions of Hebbian learning: an essay.
    Fiori S
    Neural Comput; 2005 Apr; 17(4):779-838. PubMed ID: 15829090

  • 6. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.
    Widrow B; Greenblatt A; Kim Y; Park D
    Neural Netw; 2013 Jan; 37():182-8. PubMed ID: 23140797

  • 7. Improving generalization capabilities of dynamic neural networks.
    Galicki M; Leistritz L; Zwick EB; Witte H
    Neural Comput; 2004 Jun; 16(6):1253-82. PubMed ID: 15130249

  • 8. Noise, regularizers, and unrealizable scenarios in online learning from restricted training sets.
    Xiong YS; Saad D
    Phys Rev E Stat Nonlin Soft Matter Phys; 2001 Jul; 64(1 Pt 1):011919. PubMed ID: 11461300

  • 9. Dimensional reduction for reward-based learning.
    Swinehart CD; Abbott LF
    Network; 2006 Sep; 17(3):235-52. PubMed ID: 17162613

  • 10. A fast and convergent stochastic MLP learning algorithm.
    Sakurai A
    Int J Neural Syst; 2001 Dec; 11(6):573-83. PubMed ID: 11852440

  • 11. Where do features come from?
    Hinton G
    Cogn Sci; 2014 Aug; 38(6):1078-101. PubMed ID: 23800216

  • 12. Are multi-layer backpropagation networks catastrophically amnesic?
    Yamaguchi M
    Scand J Psychol; 2004 Nov; 45(5):357-61. PubMed ID: 15535804

  • 13. Boosted ARTMAP: modifications to fuzzy ARTMAP motivated by boosting theory.
    Verzi SJ; Heileman GL; Georgiopoulos M
    Neural Netw; 2006 May; 19(4):446-68. PubMed ID: 16343845

  • 14. Is extreme learning machine feasible? A theoretical assessment (part II).
    Lin S; Liu X; Fang J; Xu Z
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):21-34. PubMed ID: 25069128

  • 15. Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison.
    Kolodziejski C; Porr B; Wörgötter F
    Biol Cybern; 2008 Mar; 98(3):259-72. PubMed ID: 18196266

  • 16. Computational properties and convergence analysis of BPNN for cyclic and almost cyclic learning with penalty.
    Wang J; Wu W; Zurada JM
    Neural Netw; 2012 Sep; 33():127-35. PubMed ID: 22622263

  • 17. Dynamics of learning near singularities in layered networks.
    Wei H; Zhang J; Cousseau F; Ozeki T; Amari S
    Neural Comput; 2008 Mar; 20(3):813-43. PubMed ID: 18045020

  • 18. How inhibitory oscillations can train neural networks and punish competitors.
    Norman KA; Newman E; Detre G; Polyn S
    Neural Comput; 2006 Jul; 18(7):1577-610. PubMed ID: 16764515

  • 19. Is extreme learning machine feasible? A theoretical assessment (part I).
    Liu X; Lin S; Fang J; Xu Z
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):7-20. PubMed ID: 25069126

  • 20. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks.
    Xu D; Zhang H; Liu L
    Neural Comput; 2010 Oct; 22(10):2655-77. PubMed ID: 20608871
