These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

133 related articles for article (PubMed ID: 30072350)

  • 1. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures.
    Mohaghegh Neyshabouri M; Gokcesu K; Gokcesu H; Ozkan H; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2019 Mar; 30(3):923-937. PubMed ID: 30072350

  • 2. An Online Minimax Optimal Algorithm for Adversarial Multiarmed Bandit Problem.
    Gokcesu K; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2018 Nov; 29(11):5565-5580. PubMed ID: 29994080

  • 3. Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Kuroki Y; Xu L; Miyauchi A; Honda J; Sugiyama M
    Neural Comput; 2020 Sep; 32(9):1733-1773. PubMed ID: 32687769

  • 4. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 5. Online Anomaly Detection With Bandwidth Optimized Hierarchical Kernel Density Estimators.
    Kerpicci M; Ozkan H; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):4253-4266. PubMed ID: 32853154

  • 6. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 7. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 8. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison.
    Flynn H; Reeb D; Kandemir M; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15308-15327. PubMed ID: 37594872

  • 9. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials.
    Varatharajah Y; Berry B
    Life (Basel); 2022 Aug; 12(8):. PubMed ID: 36013456

  • 10. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964

  • 11. Greedy Methods, Randomization Approaches, and Multiarm Bandit Algorithms for Efficient Sparsity-Constrained Optimization.
    Rakotomamonjy A; Koco S; Ralaivola L
    IEEE Trans Neural Netw Learn Syst; 2017 Nov; 28(11):2789-2802. PubMed ID: 28113680

  • 12. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
    Xie H; Tang Q; Zhu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9887-9899. PubMed ID: 35385392

  • 13. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.
    Vanli ND; Sayin MO; Delibalta I; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2017 Mar; 28(3):546-558. PubMed ID: 26978837

  • 14. Risk-aware multi-armed bandit problem with application to portfolio selection.
    Huo X; Fu F
    R Soc Open Sci; 2017 Nov; 4(11):171377. PubMed ID: 29291122

  • 15. Post-Contextual-Bandit Inference.
    Bibaut A; Chambaz A; Dimakopoulou M; Kallus N; van der Laan M
    Adv Neural Inf Process Syst; 2021 Dec; 34():28548-28559. PubMed ID: 35785105

  • 16. Multiarmed Bandit Algorithms on Zynq System-on-Chip: Go Frequentist or Bayesian?
    Santosh SVS; Darak SJ
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2602-2615. PubMed ID: 35853057

  • 17. A unified approach to universal prediction: generalized upper and lower bounds.
    Vanli ND; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2015 Mar; 26(3):646-51. PubMed ID: 25720015

  • 18. Multi-armed bandit algorithm for sequential experiments of molecular properties with dynamic feature selection.
    Abedin MM; Tabata K; Matsumura Y; Komatsuzaki T
    J Chem Phys; 2024 Jul; 161(1):. PubMed ID: 38958158

  • 19. Generalized Contextual Bandits With Latent Features: Algorithms and Applications.
    Xu X; Xie H; Lui JCS
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4763-4775. PubMed ID: 34780337

  • 20. Bandit strategies evaluated in the context of clinical trials in rare life-threatening diseases.
    Villar SS
    Probab Eng Inf Sci; 2018 Apr; 32(2):229-245. PubMed ID: 29520124
