These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

221 related articles for article (PubMed ID: 18624657)

  • 21. Involvement of basal ganglia and orbitofrontal cortex in goal-directed behavior.
    Hollerman JR; Tremblay L; Schultz W
    Prog Brain Res; 2000; 126():193-215. PubMed ID: 11105648

  • 22. Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework.
    Sadacca BF; Jones JL; Schoenbaum G
    Elife; 2016 Mar; 5():. PubMed ID: 26949249

  • 23. Abnormal temporal difference reward-learning signals in major depression.
    Kumar P; Waiter G; Ahearn T; Milders M; Reid I; Steele JD
    Brain; 2008 Aug; 131(Pt 8):2084-93. PubMed ID: 18579575

  • 24. Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
    Frémaux N; Sprekeler H; Gerstner W
    PLoS Comput Biol; 2013 Apr; 9(4):e1003024. PubMed ID: 23592970

  • 25. Dopamine, uncertainty and TD learning.
    Niv Y; Duff MO; Dayan P
    Behav Brain Funct; 2005 May; 1():6. PubMed ID: 15953384

  • 26. Anticipatory responses of dopamine neurons and cortical neurons reproduced by internal model.
    Suri RE
    Exp Brain Res; 2001 Sep; 140(2):234-40. PubMed ID: 11521155

  • 27. Dynamic resource allocation during reinforcement learning accounts for ramping and phasic dopamine activity.
    Song MR; Lee SW
    Neural Netw; 2020 Jun; 126():95-107. PubMed ID: 32203877

  • 28. A Neural Circuit Mechanism for the Involvements of Dopamine in Effort-Related Choices: Decay of Learned Values, Secondary Effects of Depletion, and Calculation of Temporal Difference Error.
    Morita K; Kato A
    eNeuro; 2018; 5(1):. PubMed ID: 29468191

  • 29. The short-latency dopamine signal: a role in discovering novel actions?
    Redgrave P; Gurney K
    Nat Rev Neurosci; 2006 Dec; 7(12):967-75. PubMed ID: 17115078

  • 30. A spiking neural model for stable reinforcement of synapses based on multiple distal rewards.
    O'Brien MJ; Srinivasa N
    Neural Comput; 2013 Jan; 25(1):123-56. PubMed ID: 23020112

  • 31. Self-organizing neural systems based on predictive learning.
    Rao RP; Sejnowski TJ
    Philos Trans A Math Phys Eng Sci; 2003 Jun; 361(1807):1149-75. PubMed ID: 12816605

  • 32. Tamping Ramping: Algorithmic, Implementational, and Computational Explanations of Phasic Dopamine Signals in the Accumbens.
    Lloyd K; Dayan P
    PLoS Comput Biol; 2015 Dec; 11(12):e1004622. PubMed ID: 26699940

  • 33. The cost of obtaining rewards enhances the reward prediction error signal of midbrain dopamine neurons.
    Tanaka S; O'Doherty JP; Sakagami M
    Nat Commun; 2019 Aug; 10(1):3674. PubMed ID: 31417077

  • 34. Theory meets pigeons: the influence of reward-magnitude on discrimination-learning.
    Rose J; Schmidt R; Grabemann M; Güntürkün O
    Behav Brain Res; 2009 Mar; 198(1):125-9. PubMed ID: 19041347

  • 35. Time, Not Size, Matters for Striatal Reward Predictions to Dopamine.
    Burke CJ; Tobler PN
    Neuron; 2016 Jul; 91(1):8-11. PubMed ID: 27387646

  • 36. TD models of reward predictive responses in dopamine neurons.
    Suri RE
    Neural Netw; 2002; 15(4-6):523-33. PubMed ID: 12371509

  • 37. Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time.
    Cone I; Clopath C; Shouval HZ
    Res Sq; 2023 Sep; ():. PubMed ID: 37790466

  • 38. Splitting the difference: how does the brain code reward episodes?
    Knutson B; Wimmer GE
    Ann N Y Acad Sci; 2007 May; 1104():54-69. PubMed ID: 17416922

  • 39. Dopamine neuronal responses in monkeys performing visually cued reward schedules.
    Ravel S; Richmond BJ
    Eur J Neurosci; 2006 Jul; 24(1):277-90. PubMed ID: 16882024

  • 40. Heterarchical reinforcement-learning model for integration of multiple cortico-striatal loops: fMRI examination in stimulus-action-reward association learning.
    Haruno M; Kawato M
    Neural Netw; 2006 Oct; 19(8):1242-54. PubMed ID: 16987637
