BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

200 related articles for article (PubMed ID: 30995214)

  • 41. Novelty and Inductive Generalization in Human Reinforcement Learning.
    Gershman SJ; Niv Y
    Top Cogn Sci; 2015 Jul; 7(3):391-415. PubMed ID: 25808176

  • 42. Reinforcement learning and Bayesian inference provide complementary models for the unique advantage of adolescents in stochastic reversal.
    Eckstein MK; Master SL; Dahl RE; Wilbrecht L; Collins AGE
    Dev Cogn Neurosci; 2022 Jun; 55():101106. PubMed ID: 35537273

  • 43. Flexibility to contingency changes distinguishes habitual and goal-directed strategies in humans.
    Lee JJ; Keramati M
    PLoS Comput Biol; 2017 Sep; 13(9):e1005753. PubMed ID: 28957319

  • 44. Supervised-actor-critic reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units.
    Yu C; Ren G; Dong Y
    BMC Med Inform Decis Mak; 2020 Jul; 20(Suppl 3):124. PubMed ID: 32646412

  • 45. Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning.
    Costa VD; Dal Monte O; Lucas DR; Murray EA; Averbeck BB
    Neuron; 2016 Oct; 92(2):505-517. PubMed ID: 27720488

  • 46. Reward-modulated Hebbian learning of decision making.
    Pfeiffer M; Nessler B; Douglas RJ; Maass W
    Neural Comput; 2010 Jun; 22(6):1399-444. PubMed ID: 20141476

  • 47. Spontaneous eye blink rate predicts individual differences in exploration and exploitation during reinforcement learning.
    Van Slooten JC; Jahfari S; Theeuwes J
    Sci Rep; 2019 Nov; 9(1):17436. PubMed ID: 31758031

  • 48. The computational roots of positivity and confirmation biases in reinforcement learning.
    Palminteri S; Lebreton M
    Trends Cogn Sci; 2022 Jul; 26(7):607-621. PubMed ID: 35662490

  • 49. Bayesian cue integration as a developmental outcome of reward mediated learning.
    Weisswange TH; Rothkopf CA; Rodemann T; Triesch J
    PLoS One; 2011; 6(7):e21575. PubMed ID: 21750717

  • 50. Reward maximization justifies the transition from sensory selection at childhood to sensory integration at adulthood.
    Daee P; Mirian MS; Ahmadabadi MN
    PLoS One; 2014; 9(7):e103143. PubMed ID: 25058591

  • 51. Impaired adaptation of learning to contingency volatility in internalizing psychopathology.
    Gagne C; Zika O; Dayan P; Bishop SJ
    Elife; 2020 Dec; 9():. PubMed ID: 33350387

  • 52. Episodic memory governs choices: An RNN-based reinforcement learning model for decision-making task.
    Zhang X; Liu L; Long G; Jiang J; Liu S
    Neural Netw; 2021 Feb; 134():1-10. PubMed ID: 33276194

  • 53. Computational approaches to modeling gambling behaviour: Opportunities for understanding disordered gambling.
    Hales CA; Clark L; Winstanley CA
    Neurosci Biobehav Rev; 2023 Apr; 147():105083. PubMed ID: 36758827

  • 54. The actor-critic learning is behind the matching law: matching versus optimal behaviors.
    Sakai Y; Fukai T
    Neural Comput; 2008 Jan; 20(1):227-51. PubMed ID: 18045007

  • 55. Model-based spatial navigation in the hippocampus-ventral striatum circuit: A computational analysis.
    Stoianov IP; Pennartz CMA; Lansink CS; Pezzulo G
    PLoS Comput Biol; 2018 Sep; 14(9):e1006316. PubMed ID: 30222746

  • 56. A simple model for learning in volatile environments.
    Piray P; Daw ND
    PLoS Comput Biol; 2020 Jul; 16(7):e1007963. PubMed ID: 32609755

  • 57. Energy-efficient and damage-recovery slithering gait design for a snake-like robot based on reinforcement learning and inverse reinforcement learning.
    Bing Z; Lemke C; Cheng L; Huang K; Knoll A
    Neural Netw; 2020 Sep; 129():323-333. PubMed ID: 32593929

  • 58. Dual learning processes underlying human decision-making in reversal learning tasks: functional significance and evidence from the model fit to human behavior.
    Bai Y; Katahira K; Ohira H
    Front Psychol; 2014; 5():871. PubMed ID: 25161635

  • 59. Structure learning in human sequential decision-making.
    Acuña DE; Schrater P
    PLoS Comput Biol; 2010 Dec; 6(12):e1001003. PubMed ID: 21151963

  • 60. A reinforcement learning approach to instrumental contingency degradation in rats.
    Dutech A; Coutureau E; Marchand AR
    J Physiol Paris; 2011; 105(1-3):36-44. PubMed ID: 21907801
