BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

124 related articles for article (PubMed ID: 37351497)

  • 1. Signal detection models as contextual bandits.
    Sherratt TN; O'Neill E
    R Soc Open Sci; 2023 Jun; 10(6):230157. PubMed ID: 37351497

  • 2. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 3. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials.
    Varatharajah Y; Berry B
    Life (Basel); 2022 Aug; 12(8):. PubMed ID: 36013456

  • 4. Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models.
    Elwood A; Leonardi M; Mohamed A; Rozza A
    Entropy (Basel); 2023 Jan; 25(2):. PubMed ID: 36832555

  • 5. Overtaking method based on sand-sifter mechanism: Why do optimistic value functions find optimal solutions in multi-armed bandit problems?
    Ochi K; Kamiura M
    Biosystems; 2015 Sep; 135():55-65. PubMed ID: 26166266

  • 6. Decision-making without a brain: how an amoeboid organism solves the two-armed bandit.
    Reid CR; MacDonald H; Mann RP; Marshall JA; Latty T; Garnier S
    J R Soc Interface; 2016 Jun; 13(119):. PubMed ID: 27278359

  • 7. Mating with Multi-Armed Bandits: Reinforcement Learning Models of Human Mate Search.
    Conroy-Beam D
    Open Mind (Camb); 2024; 8():995-1011. PubMed ID: 39170796

  • 8. Predicting Ecological Momentary Assessments in an App for Tinnitus by Learning From Each User's Stream With a Contextual Multi-Armed Bandit.
    Shahania S; Unnikrishnan V; Pryss R; Kraft R; Schobel J; Hannemann R; Schlee W; Spiliopoulou M
    Front Neurosci; 2022; 16():836834. PubMed ID: 35478848

  • 9. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.
    Kamiura M; Sano K
    Biosystems; 2017 Oct; 160():25-32. PubMed ID: 28838871

  • 10. Theory of choice in bandit, information sampling and foraging tasks.
    Averbeck BB
    PLoS Comput Biol; 2015 Mar; 11(3):e1004164. PubMed ID: 25815510

  • 11. Signal detection: applying analysis methods from psychology to animal behaviour.
    Sumner CJ; Sumner S
    Philos Trans R Soc Lond B Biol Sci; 2020 Jul; 375(1802):20190480. PubMed ID: 32420861

  • 12. The Perils of Misspecified Priors and Optional Stopping in Multi-Armed Bandits.
    Loecher M
    Front Artif Intell; 2021; 4():715690. PubMed ID: 34308342

  • 13. Finding structure in multi-armed bandits.
    Schulz E; Franklin NT; Gershman SJ
    Cogn Psychol; 2020 Jun; 119():101261. PubMed ID: 32059133

  • 14. Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges.
    Villar SS; Bowden J; Wason J
    Stat Sci; 2015; 30(2):199-215. PubMed ID: 27158186

  • 15. Foraging decisions as multi-armed bandit problems: Applying reinforcement learning algorithms to foraging data.
    Morimoto J
    J Theor Biol; 2019 Apr; 467():48-56. PubMed ID: 30735736

  • 16. Decision making for large-scale multi-armed bandit problems using bias control of chaotic temporal waveforms in semiconductor lasers.
    Morijiri K; Mihana T; Kanno K; Naruse M; Uchida A
    Sci Rep; 2022 May; 12(1):8073. PubMed ID: 35577847

  • 17. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964

  • 18. Wi-Fi Assisted Contextual Multi-Armed Bandit for Neighbor Discovery and Selection in Millimeter Wave Device to Device Communications.
    Hashima S; Hatano K; Kasban H; Mahmoud Mohamed E
    Sensors (Basel); 2021 Apr; 21(8):. PubMed ID: 33920717

  • 19. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 20. Pigeon and human performance in a multi-armed bandit task in response to changes in variable interval schedules.
    Racey D; Young ME; Garlick D; Pham JN; Blaisdell AP
    Learn Behav; 2011 Sep; 39(3):245-58. PubMed ID: 21380732
