221 related articles for article (PubMed ID: 18624657)
21. Hollerman JR; Tremblay L; Schultz W. Involvement of basal ganglia and orbitofrontal cortex in goal-directed behavior. Prog Brain Res. 2000;126:193-215. PubMed ID: 11105648
22. Sadacca BF; Jones JL; Schoenbaum G. Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework. Elife. 2016 Mar;5. PubMed ID: 26949249
26. Suri RE. Anticipatory responses of dopamine neurons and cortical neurons reproduced by internal model. Exp Brain Res. 2001 Sep;140(2):234-40. PubMed ID: 11521155
27. Song MR; Lee SW. Dynamic resource allocation during reinforcement learning accounts for ramping and phasic dopamine activity. Neural Netw. 2020 Jun;126:95-107. PubMed ID: 32203877
28. Morita K; Kato A. A neural circuit mechanism for the involvements of dopamine in effort-related choices: decay of learned values, secondary effects of depletion, and calculation of temporal difference error. eNeuro. 2018;5(1). PubMed ID: 29468191
29. Redgrave P; Gurney K. The short-latency dopamine signal: a role in discovering novel actions? Nat Rev Neurosci. 2006 Dec;7(12):967-75. PubMed ID: 17115078
30. O'Brien MJ; Srinivasa N. A spiking neural model for stable reinforcement of synapses based on multiple distal rewards. Neural Comput. 2013 Jan;25(1):123-56. PubMed ID: 23020112
31. Rao RP; Sejnowski TJ. Self-organizing neural systems based on predictive learning. Philos Trans A Math Phys Eng Sci. 2003 Jun;361(1807):1149-75. PubMed ID: 12816605
32. Lloyd K; Dayan P. Tamping ramping: algorithmic, implementational, and computational explanations of phasic dopamine signals in the accumbens. PLoS Comput Biol. 2015 Dec;11(12):e1004622. PubMed ID: 26699940
33. Tanaka S; O'Doherty JP; Sakagami M. The cost of obtaining rewards enhances the reward prediction error signal of midbrain dopamine neurons. Nat Commun. 2019 Aug;10(1):3674. PubMed ID: 31417077
34. Rose J; Schmidt R; Grabemann M; Güntürkün O. Theory meets pigeons: the influence of reward-magnitude on discrimination-learning. Behav Brain Res. 2009 Mar;198(1):125-9. PubMed ID: 19041347
35. Burke CJ; Tobler PN. Time, not size, matters for striatal reward predictions to dopamine. Neuron. 2016 Jul;91(1):8-11. PubMed ID: 27387646
36. Suri RE. TD models of reward predictive responses in dopamine neurons. Neural Netw. 2002;15(4-6):523-33. PubMed ID: 12371509
38. Knutson B; Wimmer GE. Splitting the difference: how does the brain code reward episodes? Ann N Y Acad Sci. 2007 May;1104:54-69. PubMed ID: 17416922