

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

114 related articles for article (PubMed ID: 36832555)

  • 1. Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models.
    Elwood A; Leonardi M; Mohamed A; Rozza A
    Entropy (Basel); 2023 Jan; 25(2):. PubMed ID: 36832555

  • 2. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 3. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 4. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 5. Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits.
    Ramesh A; Rauber P; Conserva M; Schmidhuber J
    Neural Comput; 2022 Oct; 34(11):2232-2272. PubMed ID: 36112923

  • 6. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964

  • 7. Mating with Multi-Armed Bandits: Reinforcement Learning Models of Human Mate Search.
    Conroy-Beam D
    Open Mind (Camb); 2024; 8():995-1011. PubMed ID: 39170796

  • 8. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 9. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials.
    Varatharajah Y; Berry B
    Life (Basel); 2022 Aug; 12(8):. PubMed ID: 36013456

  • 10. Signal detection models as contextual bandits.
    Sherratt TN; O'Neill E
    R Soc Open Sci; 2023 Jun; 10(6):230157. PubMed ID: 37351497

  • 11. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
    Xie H; Tang Q; Zhu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9887-9899. PubMed ID: 35385392

  • 12. Application of multi-armed bandits to dose-finding clinical designs.
    Kojima M
    Artif Intell Med; 2023 Dec; 146():102713. PubMed ID: 38042600

  • 13. Wi-Fi Assisted Contextual Multi-Armed Bandit for Neighbor Discovery and Selection in Millimeter Wave Device to Device Communications.
    Hashima S; Hatano K; Kasban H; Mahmoud Mohamed E
    Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33920717

  • 14. Action Centered Contextual Bandits.
    Greenewald K; Tewari A; Klasnja P; Murphy S
    Adv Neural Inf Process Syst; 2017 Dec; 30():5973-5981. PubMed ID: 29225449

  • 15. Uncertainty and exploration in a restless bandit problem.
    Speekenbrink M; Konstantinidis E
    Top Cogn Sci; 2015 Apr; 7(2):351-67. PubMed ID: 25899069

  • 16. Cognitively inspired reinforcement learning architecture and its application to giant-swing motion control.
    Uragami D; Takahashi T; Matsuo Y
    Biosystems; 2014 Feb; 116():1-9. PubMed ID: 24296286

  • 17. Revisiting the Role of Uncertainty-Driven Exploration in a (Perceived) Non-Stationary World.
    Guo D; Yu AJ
    Cogsci; 2021 Jul; 43():2045-2051. PubMed ID: 34368809

  • 18. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data.
    Chen X; Lee KM; Villar SS; Robertson DS
    PLoS One; 2022; 17(9):e0274272. PubMed ID: 36094920

  • 19. Generalized Contextual Bandits With Latent Features: Algorithms and Applications.
    Xu X; Xie H; Lui JCS
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4763-4775. PubMed ID: 34780337

  • 20. Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits.
    Li B; Yeung CH
    Chaos; 2023 Jun; 33(6):. PubMed ID: 37276557