BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

109 related articles for article (PubMed ID: 31588169)

  • 1. Optimal Query Selection Using Multi-Armed Bandits.
    Koçanaoğulları A; Marghi YM; Akçakaya M; Erdoğmuş D
    IEEE Signal Process Lett; 2018 Dec; 25(12):1870-1874. PubMed ID: 31588169

  • 2. Multi-Armed Bandits in Brain-Computer Interfaces.
    Heskebeck F; Bergeling C; Bernhardsson B
    Front Hum Neurosci; 2022; 16():931085. PubMed ID: 35874164

  • 3. On Analysis of Active Querying for Recursive State Estimation.
    Koçanaoğulları A; Akçakaya M; Erdoğmuş D
    IEEE Signal Process Lett; 2018 Jun; 25(6):743-747. PubMed ID: 31871396

  • 4. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 5. An Active RBSE Framework to Generate Optimal Stimulus Sequences in a BCI for Spelling.
    Moghadamfalahi M; Akcakaya M; Nezamfar H; Sourati J; Erdogmus D
    IEEE Trans Signal Process; 2017 Oct; 65(20):5381-5392. PubMed ID: 31871392

  • 6. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 7. Risk-aware multi-armed bandit problem with application to portfolio selection.
    Huo X; Fu F
    R Soc Open Sci; 2017 Nov; 4(11):171377. PubMed ID: 29291122

  • 8. Optimizing Infill Drilling Decisions Using Multi-Armed Bandits: Application in a Long-Term, Multi-Element Stockpile.
    Dirkx R; Dimitrakopoulos R
    Math Geosci; 2018; 50(1):35-52. PubMed ID: 31998414

  • 9. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 10. The Perils of Misspecified Priors and Optional Stopping in Multi-Armed Bandits.
    Loecher M
    Front Artif Intell; 2021; 4():715690. PubMed ID: 34308342

  • 11. Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits.
    Li B; Yeung CH
    Chaos; 2023 Jun; 33(6):. PubMed ID: 37276557

  • 12. Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models.
    Elwood A; Leonardi M; Mohamed A; Rozza A
    Entropy (Basel); 2023 Jan; 25(2):. PubMed ID: 36832555

  • 13. Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges.
    Villar SS; Bowden J; Wason J
    Stat Sci; 2015; 30(2):199-215. PubMed ID: 27158186

  • 14. A multi-objective supplier selection framework based on user-preferences.
    Toffano F; Garraffa M; Lin Y; Prestwich S; Simonis H; Wilson N
    Ann Oper Res; 2022; 308(1-2):609-640. PubMed ID: 35035013

  • 15. Adaptive Sequence-Based Stimulus Selection in an ERP-Based Brain-Computer Interface by Thompson Sampling in a Multi-Armed Bandit Problem.
    Ma T; Huggins JE; Kang J
    Proceedings (IEEE Int Conf Bioinformatics Biomed); 2021 Dec; 2021():3648-3655. PubMed ID: 35692622

  • 16. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.
    Kamiura M; Sano K
    Biosystems; 2017 Oct; 160():25-32. PubMed ID: 28838871

  • 17. AdaptiveBandit: A Multi-armed Bandit Framework for Adaptive Sampling in Molecular Simulations.
    Pérez A; Herrera-Nieto P; Doerr S; De Fabritiis G
    J Chem Theory Comput; 2020 Jul; 16(7):4685-4693. PubMed ID: 32539384

  • 18. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 19. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 20. Generalized Contextual Bandits With Latent Features: Algorithms and Applications.
    Xu X; Xie H; Lui JCS
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4763-4775. PubMed ID: 34780337
