These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service with any questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine *

113 related articles for article (PubMed ID: 37527328)

  • 1. A Thompson Sampling Algorithm With Logarithmic Regret for Unimodal Gaussian Bandit.
    Yang L; Li Z; Hu Z; Ruan S; Pan G
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5332-5341. PubMed ID: 37527328

  • 2. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 3. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 4. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 5. An Online Minimax Optimal Algorithm for Adversarial Multiarmed Bandit Problem.
    Gokcesu K; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2018 Nov; 29(11):5565-5580. PubMed ID: 29994080

  • 6. Multiarmed Bandit Algorithms on Zynq System-on-Chip: Go Frequentist or Bayesian?
    Santosh SVS; Darak SJ
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2602-2615. PubMed ID: 35853057

  • 7. Multi-Armed Bandit-Based User Network Node Selection.
    Gao Q; Xie Z
    Sensors (Basel); 2024 Jun; 24(13):. PubMed ID: 39000883

  • 8. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
    Xie H; Tang Q; Zhu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9887-9899. PubMed ID: 35385392

  • 9. Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control.
    Zhang W; Mei Y
    Technometrics; 2023; 65(1):33-43. PubMed ID: 36950530

  • 10. Generalized Contextual Bandits With Latent Features: Algorithms and Applications.
    Xu X; Xie H; Lui JCS
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4763-4775. PubMed ID: 34780337

  • 11. Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis.
    Jose ST; Moothedath S
    Entropy (Basel); 2024 Jul; 26(7):. PubMed ID: 39056968

  • 12. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 13. Self-Unaware Adversarial Multi-Armed Bandits With Switching Costs.
    Alipour-Fanid A; Dabaghchian M; Zeng K
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; 34(6):2908-2922. PubMed ID: 34587093

  • 14. Multi-Agent Thompson Sampling for Bandit Applications with Sparse Neighbourhood Structures.
    Verstraeten T; Bargiacchi E; Libin PJK; Helsen J; Roijers DM; Nowé A
    Sci Rep; 2020 Apr; 10(1):6728. PubMed ID: 32317732

  • 15. Minimax Optimal Bandits for Heavy Tail Rewards.
    Lee K; Lim S
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5280-5294. PubMed ID: 36103434

  • 16. Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Kuroki Y; Xu L; Miyauchi A; Honda J; Sugiyama M
    Neural Comput; 2020 Sep; 32(9):1733-1773. PubMed ID: 32687769

  • 17. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data.
    Chen X; Lee KM; Villar SS; Robertson DS
    PLoS One; 2022; 17(9):e0274272. PubMed ID: 36094920

  • 18. Cascaded Algorithm Selection With Extreme-Region UCB Bandit.
    Hu YQ; Liu XH; Li SQ; Yu Y
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6782-6794. PubMed ID: 34232866

  • 19. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison.
    Flynn H; Reeb D; Kandemir M; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15308-15327. PubMed ID: 37594872

  • 20. Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits.
    Li B; Yeung CH
    Chaos; 2023 Jun; 33(6):. PubMed ID: 37276557
