BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

123 related articles for article (PubMed ID: 38669690)

  • 1. Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes.
    Christensen S; Kallsen J
    Neural Comput; 2024 Jun; 36(7):1424-1432. PubMed ID: 38669690

  • 2. Dendritic normalisation improves learning in sparsely connected artificial neural networks.
    Bird AD; Jedlicka P; Cuntz H
    PLoS Comput Biol; 2021 Aug; 17(8):e1009202. PubMed ID: 34370727

  • 3. Supervised Learning Algorithm for Multilayer Spiking Neural Networks with Long-Term Memory Spike Response Model.
    Lin X; Zhang M; Wang X
    Comput Intell Neurosci; 2021; 2021():8592824. PubMed ID: 34868299

  • 4. Convergent Temperature Representations in Artificial and Biological Neural Networks.
    Haesemeyer M; Schier AF; Engert F
    Neuron; 2019 Sep; 103(6):1123-1134.e6. PubMed ID: 31376984

  • 5. Hebbian semi-supervised learning in a sample efficiency setting.
    Lagani G; Falchi F; Gennaro C; Amato G
    Neural Netw; 2021 Nov; 143():719-731. PubMed ID: 34438195

  • 6. All-optical spiking neurosynaptic networks with self-learning capabilities.
    Feldmann J; Youngblood N; Wright CD; Bhaskaran H; Pernice WHP
    Nature; 2019 May; 569(7755):208-214. PubMed ID: 31068721

  • 7. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
    Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis U LES
    Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253

  • 8. Decentralized stochastic sharpness-aware minimization algorithm.
    Chen S; Deng X; Xu D; Sun T; Li D
    Neural Netw; 2024 Aug; 176():106325. PubMed ID: 38653126

  • 9. Learning curves for stochastic gradient descent in linear feedforward networks.
    Werfel J; Xie X; Seung HS
    Neural Comput; 2005 Dec; 17(12):2699-718. PubMed ID: 16212768

  • 10. Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks.
    Bagley BA; Bordelon B; Moseley B; Wessel R
    PLoS One; 2020; 15(2):e0229083. PubMed ID: 32092107

  • 11. Optimization and applications of echo state networks with leaky-integrator neurons.
    Jaeger H; Lukosevicius M; Popovici D; Siewert U
    Neural Netw; 2007 Apr; 20(3):335-52. PubMed ID: 17517495

  • 12. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
    Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265

  • 13. Block-cyclic stochastic coordinate descent for deep neural networks.
    Nakamura K; Soatto S; Hong BW
    Neural Netw; 2021 Jul; 139():348-357. PubMed ID: 33887584

  • 14. Iterative free-energy optimization for recurrent neural networks (INFERNO).
    Pitti A; Gaussier P; Quoy M
    PLoS One; 2017; 12(3):e0173684. PubMed ID: 28282439

  • 15. A solution to the learning dilemma for recurrent networks of spiking neurons.
    Bellec G; Scherr F; Subramoney A; Hajek E; Salaj D; Legenstein R; Maass W
    Nat Commun; 2020 Jul; 11(1):3625. PubMed ID: 32681001

  • 16. Supervised learning in spiking neural networks: A review of algorithms and evaluations.
    Wang X; Lin X; Dang X
    Neural Netw; 2020 May; 125():258-280. PubMed ID: 32146356

  • 17. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 18. ASD+M: Automatic parameter tuning in stochastic optimization and on-line learning.
    WawrzyƄski P
    Neural Netw; 2017 Dec; 96():1-10. PubMed ID: 28950104

  • 19. Computational Principles of Supervised Learning in the Cerebellum.
    Raymond JL; Medina JF
    Annu Rev Neurosci; 2018 Jul; 41():233-253. PubMed ID: 29986160

  • 20. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.
    Xu Y; Zeng X; Han L; Yang J
    Neural Netw; 2013 Jul; 43():99-113. PubMed ID: 23500504
