BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

154 related articles for article (PubMed ID: 37066383)

  • 21. Believing in dopamine.
    Gershman SJ; Uchida N
    Nat Rev Neurosci; 2019 Nov; 20(11):703-714. PubMed ID: 31570826

  • 22. Human-level control through deep reinforcement learning.
    Mnih V; Kavukcuoglu K; Silver D; Rusu AA; Veness J; Bellemare MG; Graves A; Riedmiller M; Fidjeland AK; Ostrovski G; Petersen S; Beattie C; Sadik A; Antonoglou I; King H; Kumaran D; Wierstra D; Legg S; Hassabis D
    Nature; 2015 Feb; 518(7540):529-33. PubMed ID: 25719670

  • 23. Reward prediction errors, not sensory prediction errors, play a major role in model selection in human reinforcement learning.
    Wu Y; Morita M; Izawa J
    Neural Netw; 2022 Oct; 154():109-121. PubMed ID: 35872516

  • 24. Noisy recurrent neural networks: the continuous-time case.
    Das S; Olurotimi O
    IEEE Trans Neural Netw; 1998; 9(5):913-36. PubMed ID: 18255776

  • 25. Active Inference and Reinforcement Learning: A Unified Inference on Continuous State and Action Spaces Under Partial Observability.
    Malekzadeh P; Plataniotis KN
    Neural Comput; 2024 Sep; 36(10):2073-2135. PubMed ID: 39177966

  • 26. Learning Simpler Language Models with the Differential State Framework.
    Ororbia Ii AG; Mikolov T; Reitter D
    Neural Comput; 2017 Dec; 29(12):3327-3352. PubMed ID: 28957029

  • 27. Motivated Optimal Developmental Learning for Sequential Tasks Without Using Rigid Time-Discounts.
    Wang D; Duan Y; Weng J
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4917-4931. PubMed ID: 29994173

  • 28. Reward-predictive representations generalize across tasks in reinforcement learning.
    Lehnert L; Littman ML; Frank MJ
    PLoS Comput Biol; 2020 Oct; 16(10):e1008317. PubMed ID: 33057329

  • 29. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
    Frémaux N; Sprekeler H; Gerstner W
    PLoS Comput Biol; 2013 Apr; 9(4):e1003024. PubMed ID: 23592970

  • 30. A probabilistic successor representation for context-dependent learning.
    Geerts JP; Gershman SJ; Burgess N; Stachenfeld KL
    Psychol Rev; 2024 Mar; 131(2):578-597. PubMed ID: 37166847

  • 31. Exploratory State Representation Learning.
    Merckling A; Perrin-Gilbert N; Coninx A; Doncieux S
    Front Robot AI; 2022; 9():762051. PubMed ID: 35237669

  • 32. Dopamine transients encode reward prediction errors independent of learning rates.
    Mah A; Golden CEM; Constantinople CM
    bioRxiv; 2024 Aug; ():. PubMed ID: 38659861

  • 33. Deep convolutional neural network and IoT technology for healthcare.
    Wassan S; Dongyan H; Suhail B; Jhanjhi NZ; Xiao G; Ahmed S; Murugesan RK
    Digit Health; 2024; 10():20552076231220123. PubMed ID: 38250147

  • 34. Reward-based training of recurrent neural networks for cognitive and value-based tasks.
    Song HF; Yang GR; Wang XJ
    Elife; 2017 Jan; 6():. PubMed ID: 28084991

  • 35. Modeling the effects of environmental and perceptual uncertainty using deterministic reinforcement learning dynamics with partial observability.
    Barfuss W; Mann RP
    Phys Rev E; 2022 Mar; 105(3-1):034409. PubMed ID: 35428165

  • 36. Alterations in the amplitude and burst rate of beta oscillations impair reward-dependent motor learning in anxiety.
    Sporn S; Hein T; Herrojo Ruiz M
    Elife; 2020 May; 9():. PubMed ID: 32423530

  • 37. Belief state representation in the dopamine system.
    Babayan BM; Uchida N; Gershman SJ
    Nat Commun; 2018 May; 9(1):1891. PubMed ID: 29760401

  • 38. On the choice of parameters of the cost function in nested modular RNN's.
    Mandic DP; Chambers JA
    IEEE Trans Neural Netw; 2000; 11(2):315-22. PubMed ID: 18249763

  • 39. The role of state uncertainty in the dynamics of dopamine.
    Mikhael JG; Kim HR; Uchida N; Gershman SJ
    Curr Biol; 2022 Mar; 32(5):1077-1087.e9. PubMed ID: 35114098

  • 40. Correlates of reward-predictive value in learning-related hippocampal neural activity.
    Okatan M
    Hippocampus; 2009 May; 19(5):487-506. PubMed ID: 19123250
