BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

219 related articles for article (PubMed ID: 30006364)

  • 1. The Successor Representation: Its Computational Logic and Neural Substrates.
    Gershman SJ
    J Neurosci; 2018 Aug; 38(33):7193-7200. PubMed ID: 30006364

  • 2. Neural learning rules for generating flexible predictions and computing the successor representation.
    Fang C; Aronov D; Abbott LF; Mackevicius EL
    Elife; 2023 Mar; 12. PubMed ID: 36928104

  • 3. Rapid learning of predictive maps with STDP and theta phase precession.
    George TM; de Cothi W; Stachenfeld KL; Barry C
    Elife; 2023 Mar; 12. PubMed ID: 36927826

  • 4. Predictive representations can link model-based reinforcement learning to model-free mechanisms.
    Russek EM; Momennejad I; Botvinick MM; Gershman SJ; Daw ND
    PLoS Comput Biol; 2017 Sep; 13(9):e1005768. PubMed ID: 28945743

  • 5. Learning to represent reward structure: a key to adapting to complex environments.
    Nakahara H; Hikosaka O
    Neurosci Res; 2012 Dec; 74(3-4):177-83. PubMed ID: 23069349

  • 6. Reward-predictive representations generalize across tasks in reinforcement learning.
    Lehnert L; Littman ML; Frank MJ
    PLoS Comput Biol; 2020 Oct; 16(10):e1008317. PubMed ID: 33057329

  • 7. Multiple memory systems as substrates for multiple decision systems.
    Doll BB; Shohamy D; Daw ND
    Neurobiol Learn Mem; 2015 Jan; 117():4-13. PubMed ID: 24846190

  • 8. A neural network model with dopamine-like reinforcement signal that learns a spatial delayed response task.
    Suri RE; Schultz W
    Neuroscience; 1999; 91(3):871-90. PubMed ID: 10391468

  • 9. Temporal-difference reinforcement learning with distributed representations.
    Kurth-Nelson Z; Redish AD
    PLoS One; 2009 Oct; 4(10):e7362. PubMed ID: 19841749

  • 10. A distributional code for value in dopamine-based reinforcement learning.
    Dabney W; Kurth-Nelson Z; Uchida N; Starkweather CK; Hassabis D; Munos R; Botvinick M
    Nature; 2020 Jan; 577(7792):671-675. PubMed ID: 31942076

  • 11. Learning predictive cognitive maps with spiking neurons during behavior and replays.
    Bono J; Zannone S; Pedrosa V; Clopath C
    Elife; 2023 Mar; 12. PubMed ID: 36927625

  • 12. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
    Frémaux N; Sprekeler H; Gerstner W
    PLoS Comput Biol; 2013 Apr; 9(4):e1003024. PubMed ID: 23592970

  • 13. The ubiquity of model-based reinforcement learning.
    Doll BB; Simon DA; Daw ND
    Curr Opin Neurobiol; 2012 Dec; 22(6):1075-81. PubMed ID: 22959354

  • 14. A probabilistic successor representation for context-dependent learning.
    Geerts JP; Gershman SJ; Burgess N; Stachenfeld KL
    Psychol Rev; 2024 Mar; 131(2):578-597. PubMed ID: 37166847

  • 15. Dopamine encoding of novelty facilitates efficient uncertainty-driven exploration.
    Wang Y; Lak A; Manohar SG; Bogacz R
    PLoS Comput Biol; 2024 Apr; 20(4):e1011516. PubMed ID: 38626219

  • 16. Correlates of reward-predictive value in learning-related hippocampal neural activity.
    Okatan M
    Hippocampus; 2009 May; 19(5):487-506. PubMed ID: 19123250

  • 17. A recurrent neural network framework for flexible and adaptive decision making based on sequence learning.
    Zhang Z; Cheng H; Yang T
    PLoS Comput Biol; 2020 Nov; 16(11):e1008342. PubMed ID: 33141824

  • 18. Asymmetric and adaptive reward coding via normalized reinforcement learning.
    Louie K
    PLoS Comput Biol; 2022 Jul; 18(7):e1010350. PubMed ID: 35862443

  • 19. RatInABox, a toolkit for modelling locomotion and neuronal activity in continuous environments.
    George TM; Rastogi M; de Cothi W; Clopath C; Stachenfeld K; Barry C
    Elife; 2024 Feb; 13. PubMed ID: 38334473

  • 20. The successor representation in human reinforcement learning.
    Momennejad I; Russek EM; Cheong JH; Botvinick MM; Daw ND; Gershman SJ
    Nat Hum Behav; 2017 Sep; 1(9):680-692. PubMed ID: 31024137
