These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

205 related articles for article (PubMed ID: 35465269)

  • 1. Reinforcement Learning for Central Pattern Generation in Dynamical Recurrent Neural Networks.
    Yoder JA; Anderson CB; Wang C; Izquierdo EJ
    Front Comput Neurosci; 2022; 16():818985. PubMed ID: 35465269

  • 2. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning.
    Anwar H; Caby S; Dura-Bernal S; D'Onofrio D; Hasegan D; Deible M; Grunblatt S; Chadderdon GL; Kerr CC; Lakatos P; Lytton WW; Hazan H; Neymotin SA
    PLoS One; 2022; 17(5):e0265808. PubMed ID: 35544518

  • 3. Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions.
    Yaman A; Iacca G; Mocanu DC; Coler M; Fletcher G; Pechenizkiy M
    Evol Comput; 2021 Sep; 29(3):391-414. PubMed ID: 34467993

  • 4. Synaptic dynamics: linear model and adaptation algorithm.
    Yousefi A; Dibazar AA; Berger TW
    Neural Netw; 2014 Aug; 56():49-68. PubMed ID: 24867390

  • 5. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses.
    Yuan M; Wu X; Yan R; Tang H
    Neural Comput; 2019 Dec; 31(12):2368-2389. PubMed ID: 31614099

  • 6. Exploring the limits of learning: Segregation of information integration and response selection is required for learning a serial reversal task.
    Mininni CJ; Zanutto BS
    PLoS One; 2017; 12(10):e0186959. PubMed ID: 29077735

  • 7. A neuro-inspired general framework for the evolution of stochastic dynamical systems: Cellular automata, random Boolean networks and echo state networks towards criticality.
    Pontes-Filho S; Lind P; Yazidi A; Zhang J; Hammer H; Mello GBM; Sandvig I; Tufte G; Nichele S
    Cogn Neurodyn; 2020 Oct; 14(5):657-674. PubMed ID: 33014179

  • 8. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.
    Miconi T
    Elife; 2017 Feb; 6():. PubMed ID: 28230528

  • 9. A neural network model for timing control with reinforcement.
    Wang J; El-Jayyousi Y; Ozden I
    Front Comput Neurosci; 2022; 16():918031. PubMed ID: 36277612

  • 10. Evolving interpretable plasticity for spiking networks.
    Jordan J; Schmidt M; Senn W; Petrovici MA
    Elife; 2021 Oct; 10():. PubMed ID: 34709176

  • 11. Reward-dependent learning in neuronal networks for planning and decision making.
    Dehaene S; Changeux JP
    Prog Brain Res; 2000; 126():217-29. PubMed ID: 11105649

  • 12. [Dynamic paradigm in psychopathology: "chaos theory", from physics to psychiatry].
    Pezard L; Nandrino JL
    Encephale; 2001; 27(3):260-8. PubMed ID: 11488256

  • 13. Recurrent neural-network training by a learning automaton approach for trajectory learning and control system design.
    Sundareshan MK; Condarcure TA
    IEEE Trans Neural Netw; 1998; 9(3):354-68. PubMed ID: 18252461

  • 14. One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks.
    Beer C; Barak O
    Neural Comput; 2019 Oct; 31(10):1985-2003. PubMed ID: 31393826

  • 15. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity.
    Florian RV
    Neural Comput; 2007 Jun; 19(6):1468-502. PubMed ID: 17444757

  • 16. An analog VLSI recurrent neural network learning a continuous-time trajectory.
    Cauwenberghs G
    IEEE Trans Neural Netw; 1996; 7(2):346-61. PubMed ID: 18255589

  • 17. Modeling of dynamical systems through deep learning.
    Rajendra P; Brahmajirao V
    Biophys Rev; 2020 Nov; 12(6):1311-20. PubMed ID: 33222032

  • 18. Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks.
    Han D; Doya K; Tani J
    Neural Netw; 2020 Sep; 129():149-162. PubMed ID: 32534378

  • 19. Augmented Hill-Climb increases reinforcement learning efficiency for language-based de novo molecule generation.
    Thomas M; O'Boyle NM; Bender A; de Graaf C
    J Cheminform; 2022 Oct; 14(1):68. PubMed ID: 36192789

  • 20. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.
    Kappel D; Legenstein R; Habenschuss S; Hsieh M; Maass W
    eNeuro; 2018; 5(2):. PubMed ID: 29696150
