These tools will no longer be maintained as of December 31, 2024. The archived website can be found here, and the PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

141 related articles for article (PubMed ID: 36013456)

  • 1. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials.
    Varatharajah Y; Berry B
    Life (Basel); 2022 Aug; 12(8):. PubMed ID: 36013456

  • 2. The future of Cochrane Neonatal.
    Soll RF; Ovelman C; McGuire W
    Early Hum Dev; 2020 Nov; 150():105191. PubMed ID: 33036834

  • 3. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 4. A Multiplier Bootstrap Approach to Designing Robust Algorithms for Contextual Bandits.
    Xie H; Tang Q; Zhu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9887-9899. PubMed ID: 35385392

  • 5. Effectiveness and cost-effectiveness of four different strategies for SARS-CoV-2 surveillance in the general population (CoV-Surv Study): a structured summary of a study protocol for a cluster-randomised, two-factorial controlled trial.
    Deckert A; Anders S; de Allegri M; Nguyen HT; Souares A; McMahon S; Boerner K; Meurer M; Herbst K; Sand M; Koeppel L; Siems T; Brugnara L; Brenner S; Burk R; Lou D; Kirrmaier D; Duan Y; Ovchinnikova S; Marx M; Kräusslich HG; Knop M; Bärnighausen T; Denkinger C
    Trials; 2021 Jan; 22(1):39. PubMed ID: 33419461

  • 6. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 7. Safety and Efficacy of Imatinib for Hospitalized Adults with COVID-19: A structured summary of a study protocol for a randomised controlled trial.
    Emadi A; Chua JV; Talwani R; Bentzen SM; Baddley J
    Trials; 2020 Oct; 21(1):897. PubMed ID: 33115543

  • 8. Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges.
    Villar SS; Bowden J; Wason J
    Stat Sci; 2015; 30(2):199-215. PubMed ID: 27158186

  • 9. Adaptive designs for best treatment identification with top-two Thompson sampling and acceleration.
    Wang J; Tiwari R
    Pharm Stat; 2023; 22(6):1089-1103. PubMed ID: 37571869

  • 10. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 11. Polynomial-Time Algorithms for Multiple-Arm Identification with Full-Bandit Feedback.
    Kuroki Y; Xu L; Miyauchi A; Honda J; Sugiyama M
    Neural Comput; 2020 Sep; 32(9):1733-1773. PubMed ID: 32687769

  • 12. Mating with Multi-Armed Bandits: Reinforcement Learning Models of Human Mate Search.
    Conroy-Beam D
    Open Mind (Camb); 2024; 8():995-1011. PubMed ID: 39170796

  • 13. An Optimal Algorithm for the Stochastic Bandits While Knowing the Near-Optimal Mean Reward.
    Yang S; Gao Y
    IEEE Trans Neural Netw Learn Syst; 2021 May; 32(5):2285-2291. PubMed ID: 32479408

  • 14. Putting bandits into context: How function learning supports decision making.
    Schulz E; Konstantinidis E; Speekenbrink M
    J Exp Psychol Learn Mem Cogn; 2018 Jun; 44(6):927-943. PubMed ID: 29130693

  • 15. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964

  • 16. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 17. Inference for Batched Bandits.
    Zhang KW; Janson L; Murphy SA
    Adv Neural Inf Process Syst; 2020 Dec; 33():9818-9829. PubMed ID: 35002190

  • 18. Bandit strategies evaluated in the context of clinical trials in rare life-threatening diseases.
    Villar SS
    Probab Eng Inf Sci; 2018 Apr; 32(2):229-245. PubMed ID: 29520124

  • 19. Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures.
    Mohaghegh Neyshabouri M; Gokcesu K; Gokcesu H; Ozkan H; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2019 Mar; 30(3):923-937. PubMed ID: 30072350

  • 20. Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits.
    Ramesh A; Rauber P; Conserva M; Schmidhuber J
    Neural Comput; 2022 Oct; 34(11):2232-2272. PubMed ID: 36112923
