These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

253 related articles for article (PubMed ID: 32059133)

  • 1. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 2. It's new, but is it good? How generalization and uncertainty guide the exploration of novel options.
    Stojić H; Schulz E; Analytis PP; Speekenbrink M
    J Exp Psychol Gen; 2020 Oct; 149(10):1878-1907. PubMed ID: 32191080

  • 3. Putting bandits into context: How function learning supports decision making.
    Schulz E; Konstantinidis E; Speekenbrink M
    J Exp Psychol Learn Mem Cogn; 2018 Jun; 44(6):927-943. PubMed ID: 29130693

  • 4. Uncertainty and exploration in a restless bandit problem.
    Speekenbrink M; Konstantinidis E
    Top Cogn Sci; 2015 Apr; 7(2):351-67. PubMed ID: 25899069

  • 5. Similarities and differences in spatial and non-spatial cognitive maps.
    Wu CM; Schulz E; Garvert MM; Meder B; Schuck NW
    PLoS Comput Biol; 2020 Sep; 16(9):e1008149. PubMed ID: 32903264

  • 6. Sex differences in learning from exploration.
    Chen CS; Knep E; Han A; Ebitz RB; Grissom NM
    Elife; 2021 Nov; 10():. PubMed ID: 34796870

  • 7. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 8. Development of directed and random exploration in children.
    Meder B; Wu CM; Schulz E; Ruggeri A
    Dev Sci; 2021 Jul; 24(4):e13095. PubMed ID: 33539647

  • 9. Revisiting the Role of Uncertainty-Driven Exploration in a (Perceived) Non-Stationary World.
    Guo D; Yu AJ
    Cogsci; 2021 Jul; 43():2045-2051. PubMed ID: 34368809

  • 10. Dopamine blockade impairs the exploration-exploitation trade-off in rats.
    Cinotti F; Fresno V; Aklil N; Coutureau E; Girard B; Marchand AR; Khamassi M
    Sci Rep; 2019 May; 9(1):6770. PubMed ID: 31043685

  • 11. Generalization guides human exploration in vast decision spaces.
    Wu CM; Schulz E; Speekenbrink M; Nelson JD; Meder B
    Nat Hum Behav; 2018 Dec; 2(12):915-924. PubMed ID: 30988442

  • 12. Humans adaptively resolve the explore-exploit dilemma under cognitive constraints: Evidence from a multi-armed bandit task.
    Brown VM; Hallquist MN; Frank MJ; Dombrovski AY
    Cognition; 2022 Dec; 229():105233. PubMed ID: 35917612

  • 13. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 14. Generalization and Search in Risky Environments.
    Schulz E; Wu CM; Huys QJM; Krause A; Speekenbrink M
    Cogn Sci; 2018 Nov; 42(8):2592-2620. PubMed ID: 30390325

  • 15. Dynamics of visual attention in exploration and exploitation for reward-guided adjustment tasks.
    Higashi H
    Conscious Cogn; 2024 Aug; 123():103724. PubMed ID: 38996747

  • 16. Skilled bandits: Learning to choose in a reactive world.
    Hotaling JM; Navarro DJ; Newell BR
    J Exp Psychol Learn Mem Cogn; 2021 Jun; 47(6):879-905. PubMed ID: 33252926

  • 17. Placing Approach-Avoidance Conflict Within the Framework of Multi-objective Reinforcement Learning.
    Enkhtaivan E; Nishimura J; Cochran A
    Bull Math Biol; 2023 Oct; 85(11):116. PubMed ID: 37837562

  • 18. Searching for Rewards Like a Child Means Less Generalization and More Directed Exploration.
    Schulz E; Wu CM; Ruggeri A; Meder B
    Psychol Sci; 2019 Nov; 30(11):1561-1572. PubMed ID: 31652093

  • 19. Learning the value of information and reward over time when solving exploration-exploitation problems.
    Cogliati Dezza I; Yu AJ; Cleeremans A; Alexander W
    Sci Rep; 2017 Dec; 7(1):16919. PubMed ID: 29209058

  • 20. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.
    Kamiura M; Sano K
    Biosystems; 2017 Oct; 160():25-32. PubMed ID: 28838871
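Several of the listed articles (e.g. entries 7 and 20) concern "optimism in the face of uncertainty" in multi-armed bandit problems: a purely greedy learner can still explore every option if its value estimates start out unrealistically high. A minimal sketch of that mechanism in Python follows; all function names, parameter values, and reward distributions are illustrative assumptions, not taken from any cited paper.

```python
# Illustrative sketch (not from any listed article): greedy action
# selection with optimistically initialized value estimates in a
# multi-armed bandit. Each disappointing pull drags that arm's inflated
# estimate down, so the remaining still-optimistic arms look best in
# turn, producing exploration without an explicit exploration bonus.
import random

def run_bandit(true_means, n_steps, optimistic_init=5.0, step_size=0.1, seed=0):
    """Purely greedy agent whose optimistic initial Q-values drive exploration."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    q = [optimistic_init] * n_arms    # estimates start far above any true mean
    counts = [0] * n_arms
    for _ in range(n_steps):
        arm = max(range(n_arms), key=lambda a: q[a])  # greedy choice, no randomness
        reward = rng.gauss(true_means[arm], 1.0)      # noisy reward from chosen arm
        counts[arm] += 1
        q[arm] += step_size * (reward - q[arm])       # estimate decays toward true mean
    return counts

counts = run_bandit([0.2, 0.5, 0.8], n_steps=1000)
# Every arm is tried at least once despite the purely greedy rule.
```

With a constant step size the initial optimism wears off gradually rather than being overwritten by the first observed reward, which is what sustains early exploration; in the long run the highest-mean arm typically accumulates the most pulls.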

    Page 1 of 13.