These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

133 related articles for article (PubMed ID: 36094920)

  • 1. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data.
    Chen X; Lee KM; Villar SS; Robertson DS
    PLoS One; 2022; 17(9):e0274272. PubMed ID: 36094920

  • 2. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 3. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 4. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Crider K; Williams J; Qi YP; Gutman J; Yeung L; Mai C; Finkelstein J; Mehta S; Pons-Duran C; Menéndez C; Moraleda C; Rogers L; Daniels K; Green P
    Cochrane Database Syst Rev; 2022 Feb; 2(2022):. PubMed ID: 36321557

  • 5. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.
    Kamiura M; Sano K
    Biosystems; 2017 Oct; 160():25-32. PubMed ID: 28838871

  • 6. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.
    Aono M; Kim SJ; Hara M; Munakata T
    Biosystems; 2014 Mar; 117():1-9. PubMed ID: 24384066

  • 7. Covariate-adjusted response-adaptive randomization for multi-arm clinical trials using a modified forward looking Gittins index rule.
    Villar SS; Rosenberger WF
    Biometrics; 2018 Mar; 74(1):49-57. PubMed ID: 28682442

  • 8. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 9. Introducing a Method for Calculating the Allocation of Attention in a Cognitive "Two-Armed Bandit" Procedure: Probability Matching Gives Way to Maximizing.
    Heyman GM; Grisanzio KA; Liang V
    Front Psychol; 2016; 7():223. PubMed ID: 27014109

  • 10. Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Kuroki Y; Xu L; Miyauchi A; Honda J; Sugiyama M
    Neural Comput; 2020 Sep; 32(9):1733-1773. PubMed ID: 32687769

  • 11. Arm order recognition in multi-armed bandit problem with laser chaos time series.
    Narisawa N; Chauvet N; Hasegawa M; Naruse M
    Sci Rep; 2021 Feb; 11(1):4459. PubMed ID: 33627692

  • 12. Photonic decision-making for arbitrary-number-armed bandit problem utilizing parallel chaos generation.
    Peng J; Jiang N; Zhao A; Liu S; Zhang Y; Qiu K; Zhang Q
    Opt Express; 2021 Aug; 29(16):25290-25301. PubMed ID: 34614862

  • 13. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 14. Bandit Algorithm Driven by a Classical Random Walk and a Quantum Walk.
    Yamagami T; Segawa E; Mihana T; Röhm A; Horisaki R; Naruse M
    Entropy (Basel); 2023 May; 25(6):. PubMed ID: 37372187

  • 15. Uncertainty and exploration in a restless bandit problem.
    Speekenbrink M; Konstantinidis E
    Top Cogn Sci; 2015 Apr; 7(2):351-67. PubMed ID: 25899069

  • 16. Gateway Selection in Millimeter Wave UAV Wireless Networks Using Multi-Player Multi-Armed Bandit.
    Mohamed EM; Hashima S; Aldosary A; Hatano K; Abdelghany MA
    Sensors (Basel); 2020 Jul; 20(14):. PubMed ID: 32708559

  • 17. Risk-aware multi-armed bandit problem with application to portfolio selection.
    Huo X; Fu F
    R Soc Open Sci; 2017 Nov; 4(11):171377. PubMed ID: 29291122

  • 18. A response-adaptive randomization procedure for multi-armed clinical trials with normally distributed outcomes.
    Williamson SF; Villar SS
    Biometrics; 2020 Mar; 76(1):197-209. PubMed ID: 31322732

  • 19. Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits.
    Li B; Yeung CH
    Chaos; 2023 Jun; 33(6):. PubMed ID: 37276557

  • 20. Online Hard Patch Mining Using Shape Models and Bandit Algorithm for Multi-Organ Segmentation.
    He J; Zhou G; Zhou S; Chen Y
    IEEE J Biomed Health Inform; 2022 Jun; 26(6):2648-2659. PubMed ID: 34928809
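Several of the articles above (e.g. items 2, 5, and 13) study optimism-in-the-face-of-uncertainty strategies for the multi-armed bandit problem. As an illustrative sketch only, not taken from any of the cited papers, a minimal UCB1 bandit on simulated Bernoulli arms looks like this:

```python
import math
import random

def ucb1(arm_probs, horizon, seed=0):
    """Minimal UCB1: pull the arm maximizing
    empirical mean + sqrt(2 * ln t / n_pulls)."""
    rng = random.Random(seed)
    k = len(arm_probs)
    counts = [0] * k   # number of pulls per arm
    sums = [0.0] * k   # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialisation: pull each arm once
        else:
            arm = max(
                range(k),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        # Bernoulli reward drawn from the (hypothetical) true arm probability
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# With one clearly best arm, pulls concentrate on it over time:
pulls = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

The arm probabilities and horizon here are arbitrary assumptions for demonstration; the cited papers examine variants of this exploration bonus under missing data, non-stationarity, and other complications.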
