These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

273 related articles for article (PubMed ID: 25899069)

  • 1. Uncertainty and exploration in a restless bandit problem.
    Speekenbrink M; Konstantinidis E
    Top Cogn Sci; 2015 Apr; 7(2):351-67. PubMed ID: 25899069

  • 2. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 3. Dopamine blockade impairs the exploration-exploitation trade-off in rats.
    Cinotti F; Fresno V; Aklil N; Coutureau E; Girard B; Marchand AR; Khamassi M
    Sci Rep; 2019 May; 9(1):6770. PubMed ID: 31043685

  • 4. Humans adaptively resolve the explore-exploit dilemma under cognitive constraints: Evidence from a multi-armed bandit task.
    Brown VM; Hallquist MN; Frank MJ; Dombrovski AY
    Cognition; 2022 Dec; 229():105233. PubMed ID: 35917612

  • 5. Sex differences in learning from exploration.
    Chen CS; Knep E; Han A; Ebitz RB; Grissom NM
    Elife; 2021 Nov; 10():. PubMed ID: 34796870

  • 6. It's new, but is it good? How generalization and uncertainty guide the exploration of novel options.
    Stojić H; Schulz E; Analytis PP; Speekenbrink M
    J Exp Psychol Gen; 2020 Oct; 149(10):1878-1907. PubMed ID: 32191080

  • 7. Putting bandits into context: How function learning supports decision making.
    Schulz E; Konstantinidis E; Speekenbrink M
    J Exp Psychol Learn Mem Cogn; 2018 Jun; 44(6):927-943. PubMed ID: 29130693

  • 8. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 9. Dopaminergic modulation of the exploration/exploitation trade-off in human decision-making.
    Chakroun K; Mathar D; Wiehler A; Ganzer F; Peters J
    Elife; 2020 Jun; 9():. PubMed ID: 32484779

  • 10. Searching for Rewards Like a Child Means Less Generalization and More Directed Exploration.
    Schulz E; Wu CM; Ruggeri A; Meder B
    Psychol Sci; 2019 Nov; 30(11):1561-1572. PubMed ID: 31652093

  • 11. Transcranial Stimulation over Frontopolar Cortex Elucidates the Choice Attributes and Neural Mechanisms Used to Resolve Exploration-Exploitation Trade-Offs.
    Raja Beharelle A; Polanía R; Hare TA; Ruff CC
    J Neurosci; 2015 Oct; 35(43):14544-56. PubMed ID: 26511245

  • 12. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 13. Smoking and the bandit: a preliminary study of smoker and nonsmoker differences in exploratory behavior measured with a multiarmed bandit task.
    Addicott MA; Pearson JM; Wilson J; Platt ML; McClernon FJ
    Exp Clin Psychopharmacol; 2013 Feb; 21(1):66-73. PubMed ID: 23245198

  • 14. Generalization and Search in Risky Environments.
    Schulz E; Wu CM; Huys QJM; Krause A; Speekenbrink M
    Cogn Sci; 2018 Nov; 42(8):2592-2620. PubMed ID: 30390325

  • 15. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.
    Aono M; Kim SJ; Hara M; Munakata T
    Biosystems; 2014 Mar; 117():1-9. PubMed ID: 24384066

  • 16. Learning the value of information and reward over time when solving exploration-exploitation problems.
    Cogliati Dezza I; Yu AJ; Cleeremans A; Alexander W
    Sci Rep; 2017 Dec; 7(1):16919. PubMed ID: 29209058

  • 17. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.
    Kamiura M; Sano K
    Biosystems; 2017 Oct; 160():25-32. PubMed ID: 28838871

  • 18. Development of directed and random exploration in children.
    Meder B; Wu CM; Schulz E; Ruggeri A
    Dev Sci; 2021 Jul; 24(4):e13095. PubMed ID: 33539647

  • 19. Novelty and uncertainty differentially drive exploration across development.
    Nussenbaum K; Martin RE; Maulhardt S; Yang YJ; Bizzell-Hatcher G; Bhatt NS; Koenig M; Rosenbaum GM; O'Doherty JP; Cockburn J; Hartley CA
    Elife; 2023 Aug; 12():. PubMed ID: 37585251

  • 20. Decision-making without a brain: how an amoeboid organism solves the two-armed bandit.
    Reid CR; MacDonald H; Mann RP; Marshall JA; Latty T; Garnier S
    J R Soc Interface; 2016 Jun; 13(119):. PubMed ID: 27278359

    Page 1 of 14.