162 related articles for PubMed ID 31690723

  • 1. The Eighty Five Percent Rule for optimal learning.
    Wilson RC; Shenhav A; Straccia M; Cohen JD
    Nat Commun; 2019 Nov; 10(1):4646. PubMed ID: 31690723

  • 2. Universality of gradient descent neural network training.
    Welper G
    Neural Netw; 2022 Jun; 150():259-273. PubMed ID: 35334438

  • 3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 4. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
    Hao Y; Huang X; Dong M; Xu B
    Neural Netw; 2020 Jan; 121():387-395. PubMed ID: 31593843

  • 5. Biologically plausible deep learning - But how far can we go with shallow networks?
    Illing B; Gerstner W; Brea J
    Neural Netw; 2019 Oct; 118():90-101. PubMed ID: 31254771

  • 6. Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes.
    Christensen S; Kallsen J
    Neural Comput; 2024 Jun; 36(7):1424-1432. PubMed ID: 38669690

  • 7. Local online learning in recurrent networks with random feedback.
    Murray JM
    Elife; 2019 May; 8():. PubMed ID: 31124785

  • 8. The general inefficiency of batch training for gradient descent learning.
    Wilson DR; Martinez TR
    Neural Netw; 2003 Dec; 16(10):1429-51. PubMed ID: 14622875

  • 9. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
    Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis U LES
    Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253

  • 10. One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks.
    Beer C; Barak O
    Neural Comput; 2019 Oct; 31(10):1985-2003. PubMed ID: 31393826

  • 11. Biological batch normalisation: How intrinsic plasticity improves learning in deep neural networks.
    Shaw NP; Jackson T; Orchard J
    PLoS One; 2020; 15(9):e0238454. PubMed ID: 32966302

  • 12. Supervised learning in spiking neural networks: A review of algorithms and evaluations.
    Wang X; Lin X; Dang X
    Neural Netw; 2020 May; 125():258-280. PubMed ID: 32146356

  • 13. Evolving efficient learning algorithms for binary mappings.
    Bullinaria JA
    Neural Netw; 2003; 16(5-6):793-800. PubMed ID: 12850036

  • 14. A critique of pure learning and what artificial neural networks can learn from animal brains.
    Zador AM
    Nat Commun; 2019 Aug; 10(1):3770. PubMed ID: 31434893

  • 15. Radiogenomics of lower-grade gliomas: machine learning-based MRI texture analysis for predicting 1p/19q codeletion status.
    Kocak B; Durmaz ES; Ates E; Sel I; Turgut Gunes S; Kaya OK; Zeynalova A; Kilickesmez O
    Eur Radiol; 2020 Feb; 30(2):877-886. PubMed ID: 31691122

  • 16. Network Dynamics Governed by Lyapunov Functions: From Memory to Classification.
    Stern M; Shea-Brown E
    Trends Neurosci; 2020 Jul; 43(7):453-455. PubMed ID: 32386741

  • 17. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 18. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.
    Xu Y; Zeng X; Han L; Yang J
    Neural Netw; 2013 Jul; 43():99-113. PubMed ID: 23500504

  • 19. A sEMG Classification Framework with Less Training Data.
    Kaneishi D; Matthew RP; Tomizuka M
    Annu Int Conf IEEE Eng Med Biol Soc; 2018 Jul; 2018():1680-1684. PubMed ID: 30440718

  • 20. A review of learning in biologically plausible spiking neural networks.
    Taherkhani A; Belatreche A; Li Y; Cosma G; Maguire LP; McGinnity TM
    Neural Netw; 2020 Feb; 122():253-272. PubMed ID: 31726331
