These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

97 related articles for article (PubMed ID: 34780337)

  • 1. Generalized Contextual Bandits With Latent Features: Algorithms and Applications.
    Xu X; Xie H; Lui JCS
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4763-4775. PubMed ID: 34780337

  • 2. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
    Xie H; Tang Q; Zhu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9887-9899. PubMed ID: 35385392

  • 3. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 4. Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits.
    Ramesh A; Rauber P; Conserva M; Schmidhuber J
    Neural Comput; 2022 Oct; 34(11):2232-2272. PubMed ID: 36112923

  • 5. A Thompson Sampling Algorithm With Logarithmic Regret for Unimodal Gaussian Bandit.
    Yang L; Li Z; Hu Z; Ruan S; Pan G
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5332-5341. PubMed ID: 37527328

  • 6. Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis.
    Jose ST; Moothedath S
    Entropy (Basel); 2024 Jul; 26(7):. PubMed ID: 39056968

  • 7. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964

  • 8. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 9. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 10. Self-Unaware Adversarial Multi-Armed Bandits With Switching Costs.
    Alipour-Fanid A; Dabaghchian M; Zeng K
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; 34(6):2908-2922. PubMed ID: 34587093

  • 11. Per-Round Knapsack-Constrained Linear Submodular Bandits.
    Yu B; Fang M; Tao D
    Neural Comput; 2016 Dec; 28(12):2757-2789. PubMed ID: 27626968

  • 12. Building an emotion regulation recommender algorithm for socially anxious individuals using contextual bandits.
    Beltzer ML; Ameko MK; Daniel KE; Daros AR; Boukhechba M; Barnes LE; Teachman BA
    Br J Clin Psychol; 2022 Jan; 61 Suppl 1():51-72. PubMed ID: 33583059

  • 13. Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models.
    Elwood A; Leonardi M; Mohamed A; Rozza A
    Entropy (Basel); 2023 Jan; 25(2):. PubMed ID: 36832555

  • 14. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 15. Multi-Agent Thompson Sampling for Bandit Applications with Sparse Neighbourhood Structures.
    Verstraeten T; Bargiacchi E; Libin PJK; Helsen J; Roijers DM; Nowé A
    Sci Rep; 2020 Apr; 10(1):6728. PubMed ID: 32317732

  • 16. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures.
    Mohaghegh Neyshabouri M; Gokcesu K; Gokcesu H; Ozkan H; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2019 Mar; 30(3):923-937. PubMed ID: 30072350

  • 17. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials.
    Varatharajah Y; Berry B
    Life (Basel); 2022 Aug; 12(8):. PubMed ID: 36013456

  • 18. Laplacian-P-splines for Bayesian inference in the mixture cure model.
    Gressani O; Faes C; Hens N
    Stat Med; 2022 Jun; 41(14):2602-2626. PubMed ID: 35699121

  • 19. Markov chain Monte Carlo methods for hierarchical clustering of dynamic causal models.
    Yao Y; Stephan KE
    Hum Brain Mapp; 2021 Jul; 42(10):2973-2989. PubMed ID: 33826194

  • 20. PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison.
    Flynn H; Reeb D; Kandemir M; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15308-15327. PubMed ID: 37594872
