These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

155 related articles for article (PubMed ID: 29994080)

  • 1. An Online Minimax Optimal Algorithm for Adversarial Multiarmed Bandit Problem.
    Gokcesu K; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2018 Nov; 29(11):5565-5580. PubMed ID: 29994080

  • 2. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures.
    Mohaghegh Neyshabouri M; Gokcesu K; Gokcesu H; Ozkan H; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2019 Mar; 30(3):923-937. PubMed ID: 30072350

  • 3. Online Density Estimation of Nonstationary Sources Using Exponential Family of Distributions.
    Gokcesu K; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2018 Sep; 29(9):4473-4478. PubMed ID: 28920910

  • 4. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 5. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 6. Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Kuroki Y; Xu L; Miyauchi A; Honda J; Sugiyama M
    Neural Comput; 2020 Sep; 32(9):1733-1773. PubMed ID: 32687769

  • 7. A Thompson Sampling Algorithm With Logarithmic Regret for Unimodal Gaussian Bandit.
    Yang L; Li Z; Hu Z; Ruan S; Pan G
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5332-5341. PubMed ID: 37527328

  • 8. Multiarmed Bandit Algorithms on Zynq System-on-Chip: Go Frequentist or Bayesian?
    Santosh SVS; Darak SJ
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2602-2615. PubMed ID: 35853057

  • 9. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison.
    Flynn H; Reeb D; Kandemir M; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15308-15327. PubMed ID: 37594872

  • 10. Greedy Methods, Randomization Approaches, and Multiarm Bandit Algorithms for Efficient Sparsity-Constrained Optimization.
    Rakotomamonjy A; Koco S; Ralaivola L
    IEEE Trans Neural Netw Learn Syst; 2017 Nov; 28(11):2789-2802. PubMed ID: 28113680

  • 11. Nash Equilibrium of Social-Learning Agents in a Restless Multiarmed Bandit Game.
    Nakayama K; Hisakado M; Mori S
    Sci Rep; 2017 May; 7(1):1937. PubMed ID: 28512339

  • 12. Self-Unaware Adversarial Multi-Armed Bandits With Switching Costs.
    Alipour-Fanid A; Dabaghchian M; Zeng K
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; 34(6):2908-2922. PubMed ID: 34587093

  • 13. A unified approach to universal prediction: generalized upper and lower bounds.
    Vanli ND; Kozat SS
IEEE Trans Neural Netw Learn Syst; 2015 Mar; 26(3):646-651. PubMed ID: 25720015

  • 14. A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems.
    Sabar NR; Ayob M; Kendall G; Qu R
IEEE Trans Cybern; 2015 Feb; 45(2):217-228. PubMed ID: 24951713

  • 15. Intelligent Task Caching in Edge Cloud via Bandit Learning.
    Miao Y; Hao Y; Chen M; Gharavi H; Hwang K
    IEEE Trans Netw Sci Eng; 2021; 8(1):. PubMed ID: 34409117

  • 16. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
    Xie H; Tang Q; Zhu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9887-9899. PubMed ID: 35385392

  • 17. Online Learning Algorithm for Distributed Convex Optimization With Time-Varying Coupled Constraints and Bandit Feedback.
    Li J; Gu C; Wu Z; Huang T
    IEEE Trans Cybern; 2022 Feb; 52(2):1009-1020. PubMed ID: 32452789

  • 18. Minimax Optimal Bandits for Heavy Tail Rewards.
    Lee K; Lim S
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5280-5294. PubMed ID: 36103434

  • 19. Channel selection based on trust and multiarmed bandit in multiuser, multichannel cognitive radio networks.
    Zeng F; Shen X
    ScientificWorldJournal; 2014; 2014():916156. PubMed ID: 24711741

  • 20. A Multiarmed Bandit Approach to Adaptive Water Quality Management.
    Martin DM; Johnson FA
    Integr Environ Assess Manag; 2020 Nov; 16(6):841-852. PubMed ID: 32584467
