191 related articles for PubMed ID 28838871:
1. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm. Kamiura M; Sano K. Biosystems. 2017 Oct;160:25-32. PubMed ID: 28838871.
2. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems? Ochi K; Kamiura M. Biosystems. 2015 Sep;135:55-65. PubMed ID: 26166266.
3. An empirical evaluation of active inference in multi-armed bandits. Marković D; Stojić H; Schwöbel S; Kiebel SJ. Neural Netw. 2021 Dec;144:229-246. PubMed ID: 34507043.
4. Decision making for large-scale multi-armed bandit problems using bias control of chaotic temporal waveforms in semiconductor lasers. Morijiri K; Mihana T; Kanno K; Naruse M; Uchida A. Sci Rep. 2022 May;12(1):8073. PubMed ID: 35577847.
5. Uncertainty and exploration in a restless bandit problem. Speekenbrink M; Konstantinidis E. Top Cogn Sci. 2015 Apr;7(2):351-67. PubMed ID: 25899069.
7. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm. Cavenaghi E; Sottocornola G; Stella F; Zanker M. Entropy (Basel). 2021 Mar;23(3). PubMed ID: 33807028.
8. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data. Chen X; Lee KM; Villar SS; Robertson DS. PLoS One. 2022;17(9):e0274272. PubMed ID: 36094920.
9. Risk-aware multi-armed bandit problem with application to portfolio selection. Huo X; Fu F. R Soc Open Sci. 2017 Nov;4(11):171377. PubMed ID: 29291122.
11. Decision-making without a brain: how an amoeboid organism solves the two-armed bandit. Reid CR; MacDonald H; Mann RP; Marshall JA; Latty T; Garnier S. J R Soc Interface. 2016 Jun;13(119). PubMed ID: 27278359.
12. Arm order recognition in multi-armed bandit problem with laser chaos time series. Narisawa N; Chauvet N; Hasegawa M; Naruse M. Sci Rep. 2021 Feb;11(1):4459. PubMed ID: 33627692.
14. Optimism in Active Learning. Collet T; Pietquin O. Comput Intell Neurosci. 2015;2015:973696. PubMed ID: 26681934.
15. Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits. Li B; Yeung CH. Chaos. 2023 Jun;33(6). PubMed ID: 37276557.
16. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward. Yang S; Gao Y. IEEE Trans Neural Netw Learn Syst. 2021 May;32(5):2285-2291. PubMed ID: 32479408.
18. Humans adaptively resolve the explore-exploit dilemma under cognitive constraints: Evidence from a multi-armed bandit task. Brown VM; Hallquist MN; Frank MJ; Dombrovski AY. Cognition. 2022 Dec;229:105233. PubMed ID: 35917612.
19. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison. Flynn H; Reeb D; Kandemir M; Peters J. IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15308-15327. PubMed ID: 37594872.
20. Structure learning in human sequential decision-making. Acuña DE; Schrater P. PLoS Comput Biol. 2010 Dec;6(12):e1001003. PubMed ID: 21151963.
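Several of the entries above (notably 1, 2, and 14) revolve around optimism in the face of uncertainty, the principle behind upper-confidence-bound (UCB) bandit algorithms: inflate each arm's empirical mean by an uncertainty bonus and pull the arm with the highest inflated estimate. For orientation, here is a minimal UCB1 sketch in Python. It is a textbook illustration only, not the algorithm of any paper listed above, and the Bernoulli arm means and horizon are made-up example values.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Minimal UCB1: pull each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / pulls). `means` are hypothetical
    Bernoulli success probabilities, not data from any cited study."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k    # pulls per arm
    totals = [0.0] * k  # summed rewards per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization round: try every arm once
        else:
            # Optimism: each empirical mean gets an uncertainty bonus
            # that shrinks as the arm is sampled more often.
            arm = max(range(k), key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli draw
        counts[arm] += 1
        totals[arm] += reward
    return counts

# The bonus steers most pulls toward the best (0.7) arm over time.
print(ucb1([0.3, 0.5, 0.7], horizon=5000))
```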