6. Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models. Elwood A; Leonardi M; Mohamed A; Rozza A. Entropy (Basel); 2023 Jan; 25(2). PubMed ID: 36832555
7. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm. Kamiura M; Sano K. Biosystems; 2017 Oct; 160:25-32. PubMed ID: 28838871
8. Risk-aware multi-armed bandit problem with application to portfolio selection. Huo X; Fu F. R Soc Open Sci; 2017 Nov; 4(11):171377. PubMed ID: 29291122
9. Uncertainty and exploration in a restless bandit problem. Speekenbrink M; Konstantinidis E. Top Cogn Sci; 2015 Apr; 7(2):351-67. PubMed ID: 25899069
10. Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits. Li B; Yeung CH. Chaos; 2023 Jun; 33(6). PubMed ID: 37276557
11. Revisiting the Role of Uncertainty-Driven Exploration in a (Perceived) Non-Stationary World. Guo D; Yu AJ. Cogsci; 2021 Jul; 43:2045-2051. PubMed ID: 34368809
12. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials. Varatharajah Y; Berry B. Life (Basel); 2022 Aug; 12(8). PubMed ID: 36013456
13. Decision-making without a brain: how an amoeboid organism solves the two-armed bandit. Reid CR; MacDonald H; Mann RP; Marshall JA; Latty T; Garnier S. J R Soc Interface; 2016 Jun; 13(119). PubMed ID: 27278359
14. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem. Aono M; Kim SJ; Hara M; Munakata T. Biosystems; 2014 Mar; 117:1-9. PubMed ID: 24384066
15. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data. Chen X; Lee KM; Villar SS; Robertson DS. PLoS One; 2022; 17(9):e0274272. PubMed ID: 36094920
16. Altered Statistical Learning and Decision-Making in Methamphetamine Dependence: Evidence from a Two-Armed Bandit Task. Harlé KM; Zhang S; Schiff M; Mackey S; Paulus MP; Yu AJ. Front Psychol; 2015; 6:1910. PubMed ID: 26733906
17. Mating with Multi-Armed Bandits: Reinforcement Learning Models of Human Mate Search. Conroy-Beam D. Open Mind (Camb); 2024; 8:995-1011. PubMed ID: 39170796