BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

326 related articles for article (PubMed ID: 35367735)

  • 1. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks.
    Triche A; Maida AS; Kumar A
    Neural Netw; 2022 Jul; 151():16-33. PubMed ID: 35367735

  • 2. A reinforcement learning framework for spiking networks with dynamic synapses.
    El-Laithy K; Bogdan M
    Comput Intell Neurosci; 2011; 2011():869348. PubMed ID: 22046180

  • 3. Deep Reinforcement Learning With Modulated Hebbian Plus Q-Network Architecture.
    Ladosz P; Ben-Iwhiwhu E; Dick J; Ketz N; Kolouri S; Krichmar JL; Pilly PK; Soltoggio A
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2045-2056. PubMed ID: 34559664

  • 4. Combining STDP and binary networks for reinforcement learning from images and sparse rewards.
    Chevtchenko SF; Ludermir TB
    Neural Netw; 2021 Dec; 144():496-506. PubMed ID: 34601362

  • 5. Neuro-Inspired Reinforcement Learning to Improve Trajectory Prediction in Reward-Guided Behavior.
    Chen BW; Yang SH; Kuo CH; Chen JW; Lo YC; Kuo YT; Lin YC; Chang HC; Lin SH; Yu X; Qu B; Ro SV; Lai HY; Chen YY
    Int J Neural Syst; 2022 Sep; 32(9):2250038. PubMed ID: 35989578

  • 6. Classic Hebbian learning endows feed-forward networks with sufficient adaptability in challenging reinforcement learning tasks.
    Burns TF
    J Neurophysiol; 2021 Jun; 125(6):2034-2037. PubMed ID: 33909499

  • 7. Asymmetric and adaptive reward coding via normalized reinforcement learning.
    Louie K
    PLoS Comput Biol; 2022 Jul; 18(7):e1010350. PubMed ID: 35862443

  • 8. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses.
    Yuan M; Wu X; Yan R; Tang H
    Neural Comput; 2019 Dec; 31(12):2368-2389. PubMed ID: 31614099

  • 9. Nutrient-Sensitive Reinforcement Learning in Monkeys.
    Huang FY; Grabenhorst F
    J Neurosci; 2023 Mar; 43(10):1714-1730. PubMed ID: 36669886

  • 10. Memory-Dependent Computation and Learning in Spiking Neural Networks Through Hebbian Plasticity.
    Limbacher T; Ozdenizci O; Legenstein R
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38113154

  • 11. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity.
    Florian RV
    Neural Comput; 2007 Jun; 19(6):1468-502. PubMed ID: 17444757

  • 12. Harnessing the flexibility of neural networks to predict dynamic theoretical parameters underlying human choice behavior.
    Ger Y; Nachmani E; Wolf L; Shahar N
    PLoS Comput Biol; 2024 Jan; 20(1):e1011678. PubMed ID: 38175848

  • 13. Modeling Bellman-error with logistic distribution with applications in reinforcement learning.
    Lv O; Zhou B; Yang LF
    Neural Netw; 2024 Sep; 177():106387. PubMed ID: 38788292

  • 14. Integrating unsupervised and reinforcement learning in human categorical perception: A computational model.
    Granato G; Cartoni E; Da Rold F; Mattera A; Baldassarre G
    PLoS One; 2022; 17(5):e0267838. PubMed ID: 35536843

  • 15. Spontaneous eye blink rate predicts individual differences in exploration and exploitation during reinforcement learning.
    Van Slooten JC; Jahfari S; Theeuwes J
    Sci Rep; 2019 Nov; 9(1):17436. PubMed ID: 31758031

  • 16. Spiking neural networks with different reinforcement learning (RL) schemes in a multiagent setting.
    Christodoulou C; Cleanthous A
    Chin J Physiol; 2010 Dec; 53(6):447-53. PubMed ID: 21793357

  • 17. On the normative advantages of dopamine and striatal opponency for learning and choice.
    Jaskir A; Frank MJ
    Elife; 2023 Mar; 12():. PubMed ID: 36946371

  • 18. Reconciling the STDP and BCM models of synaptic plasticity in a spiking recurrent neural network.
    Bush D; Philippides A; Husbands P; O'Shea M
    Neural Comput; 2010 Aug; 22(8):2059-85. PubMed ID: 20438333

  • 19. Neural circuits for learning context-dependent associations of stimuli.
    Zhu H; Paschalidis IC; Hasselmo ME
    Neural Netw; 2018 Nov; 107():48-60. PubMed ID: 30177226

  • 20. A differential Hebbian framework for biologically-plausible motor control.
    Verduzco-Flores S; Dorrell W; De Schutter E
    Neural Netw; 2022 Jun; 150():237-258. PubMed ID: 35325677
