

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

233 related articles for article (PubMed ID: 34507043)

  • 21. Signal detection models as contextual bandits.
    Sherratt TN; O'Neill E
    R Soc Open Sci; 2023 Jun; 10(6):230157. PubMed ID: 37351497

  • 22. Dopaminergic modulation of the exploration/exploitation trade-off in human decision-making.
    Chakroun K; Mathar D; Wiehler A; Ganzer F; Peters J
    Elife; 2020 Jun; 9():. PubMed ID: 32484779

  • 23. Multi-Armed Bandit-Based User Network Node Selection.
    Gao Q; Xie Z
    Sensors (Basel); 2024 Jun; 24(13):. PubMed ID: 39000883

  • 24. Decision making for large-scale multi-armed bandit problems using bias control of chaotic temporal waveforms in semiconductor lasers.
    Morijiri K; Mihana T; Kanno K; Naruse M; Uchida A
    Sci Rep; 2022 May; 12(1):8073. PubMed ID: 35577847

  • 25. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 26. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison.
    Flynn H; Reeb D; Kandemir M; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15308-15327. PubMed ID: 37594872

  • 27. Gateway Selection in Millimeter Wave UAV Wireless Networks Using Multi-Player Multi-Armed Bandit.
    Mohamed EM; Hashima S; Aldosary A; Hatano K; Abdelghany MA
    Sensors (Basel); 2020 Jul; 20(14):. PubMed ID: 32708559

  • 28. Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits.
    Ramesh A; Rauber P; Conserva M; Schmidhuber J
    Neural Comput; 2022 Oct; 34(11):2232-2272. PubMed ID: 36112923

  • 29. Application of multi-armed bandits to dose-finding clinical designs.
    Kojima M
    Artif Intell Med; 2023 Dec; 146():102713. PubMed ID: 38042600

  • 30. The Perils of Misspecified Priors and Optional Stopping in Multi-Armed Bandits.
    Loecher M
    Front Artif Intell; 2021; 4():715690. PubMed ID: 34308342

  • 31. Inference for Batched Bandits.
    Zhang KW; Janson L; Murphy SA
    Adv Neural Inf Process Syst; 2020 Dec; 33():9818-9829. PubMed ID: 35002190

  • 32. Bandit Algorithm Driven by a Classical Random Walk and a Quantum Walk.
    Yamagami T; Segawa E; Mihana T; Röhm A; Horisaki R; Naruse M
    Entropy (Basel); 2023 May; 25(6):. PubMed ID: 37372187

  • 33. Structure learning in human sequential decision-making.
    Acuña DE; Schrater P
    PLoS Comput Biol; 2010 Dec; 6(12):e1001003. PubMed ID: 21151963

  • 34. Uncertainty and Exploration.
    Gershman SJ
    Decision (Wash D C); 2019 Jul; 6(3):277-286. PubMed ID: 33768122

  • 35. Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning.
    Hodson R; Bassett B; van Hoof C; Rosman B; Solms M; Shock JP; Smith R
    ArXiv; 2023 Aug; ():. PubMed ID: 37645053

  • 36. Smoking and the bandit: a preliminary study of smoker and nonsmoker differences in exploratory behavior measured with a multiarmed bandit task.
    Addicott MA; Pearson JM; Wilson J; Platt ML; McClernon FJ
    Exp Clin Psychopharmacol; 2013 Feb; 21(1):66-73. PubMed ID: 23245198

  • 37. AdaptiveBandit: A Multi-armed Bandit Framework for Adaptive Sampling in Molecular Simulations.
    Pérez A; Herrera-Nieto P; Doerr S; De Fabritiis G
    J Chem Theory Comput; 2020 Jul; 16(7):4685-4693. PubMed ID: 32539384

  • 38. Skilled bandits: Learning to choose in a reactive world.
    Hotaling JM; Navarro DJ; Newell BR
    J Exp Psychol Learn Mem Cogn; 2021 Jun; 47(6):879-905. PubMed ID: 33252926

  • 39. Anytime Exploration for Multi-armed Bandits using Confidence Information.
    Jun KS; Nowak R
    JMLR Workshop Conf Proc; 2016 Jun; 48():974-982. PubMed ID: 29541329

  • 40. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964
