5. Minimax Optimal Bandits for Heavy Tail Rewards. Lee K; Lim S. IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):5280-5294. PubMed ID: 36103434.
6. Inference for Batched Bandits. Zhang KW; Janson L; Murphy SA. Adv Neural Inf Process Syst. 2020 Dec;33:9818-9829. PubMed ID: 35002190.
7. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials. Varatharajah Y; Berry B. Life (Basel). 2022 Aug;12(8). PubMed ID: 36013456.
8. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward. Yang S; Gao Y. IEEE Trans Neural Netw Learn Syst. 2021 May;32(5):2285-2291. PubMed ID: 32479408.
9. Wi-Fi Assisted Contextual Multi-Armed Bandit for Neighbor Discovery and Selection in Millimeter Wave Device to Device Communications. Hashima S; Hatano K; Kasban H; Mahmoud Mohamed E. Sensors (Basel). 2021 Apr;21(8). PubMed ID: 33920717.
10. Cascaded Algorithm Selection With Extreme-Region UCB Bandit. Hu YQ; Liu XH; Li SQ; Yu Y. IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):6782-6794. PubMed ID: 34232866.
11. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison. Flynn H; Reeb D; Kandemir M; Peters J. IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15308-15327. PubMed ID: 37594872.
12. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints. Huang H; Shen L; Ye D; Liu W. IEEE Trans Neural Netw Learn Syst. 2023 Nov;PP. PubMed ID: 37999964.
14. An Efficient Algorithm for Deep Stochastic Contextual Bandits. Zhu T; Liang G; Zhu C; Li H; Bi J. Proc AAAI Conf Artif Intell. 2021 Feb;35(12):11193-11201. PubMed ID: 34745766.
15. A Thompson Sampling Algorithm With Logarithmic Regret for Unimodal Gaussian Bandit. Yang L; Li Z; Hu Z; Ruan S; Pan G. IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5332-5341. PubMed ID: 37527328.