These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


PUBMED FOR HANDHELDS

Journal Abstract Search


224 related items for PubMed ID: 14622875

  • 1. The general inefficiency of batch training for gradient descent learning.
    Wilson DR, Martinez TR.
    Neural Netw; 2003 Dec; 16(10):1429-51. PubMed ID: 14622875
    [Abstract] [Full Text] [Related]

  • 4. Polynomial harmonic GMDH learning networks for time series modeling.
    Nikolaev NY, Iba H.
    Neural Netw; 2003 Dec; 16(10):1527-40. PubMed ID: 14622880
    [Abstract] [Full Text] [Related]

  • 6. Reinforcement learning in continuous time and space: interference and not ill conditioning is the main problem when using distributed function approximators.
    Baddeley B.
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):950-6. PubMed ID: 18632383
    [Abstract] [Full Text] [Related]

  • 9. Boosted ARTMAP: modifications to fuzzy ARTMAP motivated by boosting theory.
    Verzi SJ, Heileman GL, Georgiopoulos M.
    Neural Netw; 2006 May; 19(4):446-68. PubMed ID: 16343845
    [Abstract] [Full Text] [Related]

  • 13. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L, Yang Y, Min R, Chakradhar S.
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660
    [Abstract] [Full Text] [Related]

  • 14. A linear recurrent kernel online learning algorithm with sparse updates.
    Fan H, Song Q.
    Neural Netw; 2014 Feb; 50():142-53. PubMed ID: 24300551
    [Abstract] [Full Text] [Related]

  • 15. A fast algorithm for learning a ranking function from large-scale data sets.
    Raykar VC, Duraiswami R, Krishnapuram B.
    IEEE Trans Pattern Anal Mach Intell; 2008 Jul; 30(7):1158-70. PubMed ID: 18550900
    [Abstract] [Full Text] [Related]

  • 16. Optimization and applications of echo state networks with leaky-integrator neurons.
    Jaeger H, Lukosevicius M, Popovici D, Siewert U.
    Neural Netw; 2007 Apr; 20(3):335-52. PubMed ID: 17517495
    [Abstract] [Full Text] [Related]

  • 19. Distributed computing methodology for training neural networks in an image-guided diagnostic application.
    Plagianakos VP, Magoulas GD, Vrahatis MN.
    Comput Methods Programs Biomed; 2006 Mar; 81(3):228-35. PubMed ID: 16476503
    [Abstract] [Full Text] [Related]

  • 20. Evolving efficient learning algorithms for binary mappings.
    Bullinaria JA.
    Neural Netw; 2003 Mar; 16(5-6):793-800. PubMed ID: 12850036
    [Abstract] [Full Text] [Related]


    Page 1 of 12.