These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


207 related articles for article (PubMed ID: 18624657)

  • 41. Heterarchical reinforcement-learning model for integration of multiple cortico-striatal loops: fMRI examination in stimulus-action-reward association learning.
    Haruno M; Kawato M
    Neural Netw; 2006 Oct; 19(8):1242-54. PubMed ID: 16987637

  • 42. Neural coding of reward-prediction error signals during classical conditioning with attractive faces.
    Bray S; O'Doherty J
    J Neurophysiol; 2007 Apr; 97(4):3036-45. PubMed ID: 17303809

  • 43. Dopamine neurons learn to encode the long-term value of multiple future rewards.
    Enomoto K; Matsumoto N; Nakai S; Satoh T; Sato TK; Ueda Y; Inokawa H; Haruno M; Kimura M
    Proc Natl Acad Sci U S A; 2011 Sep; 108(37):15462-7. PubMed ID: 21896766

  • 44. Reward prediction error computation in the pedunculopontine tegmental nucleus neurons.
    Kobayashi Y; Okada K
    Ann N Y Acad Sci; 2007 May; 1104():310-23. PubMed ID: 17344541

  • 45. Dopamine reward prediction errors reflect hidden-state inference across time.
    Starkweather CK; Babayan BM; Uchida N; Gershman SJ
    Nat Neurosci; 2017 Apr; 20(4):581-589. PubMed ID: 28263301

  • 46. Addiction as a computational process gone awry.
    Redish AD
    Science; 2004 Dec; 306(5703):1944-7. PubMed ID: 15591205

  • 47. Reward-dependent learning in neuronal networks for planning and decision making.
    Dehaene S; Changeux JP
    Prog Brain Res; 2000; 126():217-29. PubMed ID: 11105649

  • 48. Temporal difference model reproduces anticipatory neural activity.
    Suri RE; Schultz W
    Neural Comput; 2001 Apr; 13(4):841-62. PubMed ID: 11255572

  • 49. Dopamine Neurons Respond to Errors in the Prediction of Sensory Features of Expected Rewards.
    Takahashi YK; Batchelor HM; Liu B; Khanna A; Morales M; Schoenbaum G
    Neuron; 2017 Sep; 95(6):1395-1405.e3. PubMed ID: 28910622

  • 50. Different neural correlates of reward expectation and reward expectation error in the putamen and caudate nucleus during stimulus-action-reward association learning.
    Haruno M; Kawato M
    J Neurophysiol; 2006 Feb; 95(2):948-59. PubMed ID: 16192338

  • 51. Neurons in dopamine-rich areas of the rat medial midbrain predominantly encode the outcome-related rather than behavioural switching properties of conditioned stimuli.
    Wilson DI; Bowman EM
    Eur J Neurosci; 2006 Jan; 23(1):205-18. PubMed ID: 16420430

  • 52. Subcortical control of dopamine neurons: the good, the bad and the unexpected.
    Stewart RD; Dommett EJ
    Brain Res Bull; 2006 Dec; 71(1-3):1-3. PubMed ID: 17113920

  • 53. A Unified Framework for Dopamine Signals across Timescales.
    Kim HR; Malik AN; Mikhael JG; Bech P; Tsutsui-Kimura I; Sun F; Zhang Y; Li Y; Watabe-Uchida M; Gershman SJ; Uchida N
    Cell; 2020 Dec; 183(6):1600-1616.e25. PubMed ID: 33248024

  • 54. A neural substrate of prediction and reward.
    Schultz W; Dayan P; Montague PR
    Science; 1997 Mar; 275(5306):1593-9. PubMed ID: 9054347

  • 55. Neuronal coding of prediction errors.
    Schultz W; Dickinson A
    Annu Rev Neurosci; 2000; 23():473-500. PubMed ID: 10845072

  • 56. A gradual temporal shift of dopamine responses mirrors the progression of temporal difference error in machine learning.
    Amo R; Matias S; Yamanaka A; Tanaka KF; Uchida N; Watabe-Uchida M
    Nat Neurosci; 2022 Aug; 25(8):1082-1092. PubMed ID: 35798979

  • 57. Evaluating the TD model of classical conditioning.
    Ludvig EA; Sutton RS; Kehoe EJ
    Learn Behav; 2012 Sep; 40(3):305-19. PubMed ID: 22927003

  • 58. Adapting the flow of time with dopamine.
    Mikhael JG; Gershman SJ
    J Neurophysiol; 2019 May; 121(5):1748-1760. PubMed ID: 30864882

  • 59. Learning of sequential movements by neural network model with dopamine-like reinforcement signal.
    Suri RE; Schultz W
    Exp Brain Res; 1998 Aug; 121(3):350-4. PubMed ID: 9746140

  • 60. Posterior weighted reinforcement learning with state uncertainty.
    Larsen T; Leslie DS; Collins EJ; Bogacz R
    Neural Comput; 2010 May; 22(5):1149-79. PubMed ID: 20100078
