155 related articles for article (PubMed ID: 29994080)

  • 21. An empirical evaluation of active inference in multi-armed bandits.
    Marković D; Stojić H; Schwöbel S; Kiebel SJ
    Neural Netw; 2021 Dec; 144():229-246. PubMed ID: 34507043

  • 22. Master-Slave Deep Architecture for Top-K Multiarmed Bandits With Nonlinear Bandit Feedback and Diversity Constraints.
    Huang H; Shen L; Ye D; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37999964

  • 23. Non Stationary Multi-Armed Bandit: Empirical Evaluation of a New Concept Drift-Aware Algorithm.
    Cavenaghi E; Sottocornola G; Stella F; Zanker M
    Entropy (Basel); 2021 Mar; 23(3):. PubMed ID: 33807028

  • 24. Approximate information for efficient exploration-exploitation strategies.
    Barbier-Chebbah A; Vestergaard CL; Masson JB
    Phys Rev E; 2024 May; 109(5):L052105. PubMed ID: 38907409

  • 25. Two-Stage Multiarmed Bandit for Reconfigurable Intelligent Surface Aided Millimeter Wave Communications.
    Mohamed EM; Hashima S; Hatano K; Aldossari SA
    Sensors (Basel); 2022 Mar; 22(6):. PubMed ID: 35336350

  • 26. Risk-aware multi-armed bandit problem with application to portfolio selection.
    Huo X; Fu F
    R Soc Open Sci; 2017 Nov; 4(11):171377. PubMed ID: 29291122

  • 27. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data.
    Chen X; Lee KM; Villar SS; Robertson DS
    PLoS One; 2022; 17(9):e0274272. PubMed ID: 36094920

  • 28. Massive Maritime Path Planning: A Contextual Online Learning Approach.
    Zhou P; Zhao W; Li J; Li A; Du W; Wen S
    IEEE Trans Cybern; 2021 Dec; 51(12):6262-6273. PubMed ID: 32112685

  • 29. A Contextual-Bandit-Based Approach for Informed Decision-Making in Clinical Trials.
    Varatharajah Y; Berry B
    Life (Basel); 2022 Aug; 12(8):. PubMed ID: 36013456

  • 30. Minimax partial distortion competitive learning for optimal codebook design.
    Zhu C; Po LM
    IEEE Trans Image Process; 1998; 7(10):1400-9. PubMed ID: 18276207

  • 31. Covariance Matrix Adaptation for Multiobjective Multiarmed Bandits.
    Drugan MM
    IEEE Trans Neural Netw Learn Syst; 2019 Aug; 30(8):2493-2502. PubMed ID: 30602427

  • 32. Smoking and the bandit: a preliminary study of smoker and nonsmoker differences in exploratory behavior measured with a multiarmed bandit task.
    Addicott MA; Pearson JM; Wilson J; Platt ML; McClernon FJ
    Exp Clin Psychopharmacol; 2013 Feb; 21(1):66-73. PubMed ID: 23245198

  • 33. A Multiarmed Bandit Approach for LTE-U/Wi-Fi Coexistence in a Multicell Scenario.
    Diógenes do Rego I; de Castro Neto JM; Neto SFG; de Santana PM; de Sousa VA; Vieira D; Venâncio Neto A
    Sensors (Basel); 2023 Jul; 23(15):. PubMed ID: 37571502

  • 34. Automatic motor task selection via a bandit algorithm for a brain-controlled button.
    Fruitet J; Carpentier A; Munos R; Clerc M
    J Neural Eng; 2013 Feb; 10(1):016012. PubMed ID: 23337361

  • 35. Per-Round Knapsack-Constrained Linear Submodular Bandits.
    Yu B; Fang M; Tao D
    Neural Comput; 2016 Dec; 28(12):2757-2789. PubMed ID: 27626968

  • 36. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.
    Aono M; Kim SJ; Hara M; Munakata T
    Biosystems; 2014 Mar; 117():1-9. PubMed ID: 24384066

  • 37. Achieving Online Regression Performance of LSTMs With Simple RNNs.
    Vural NM; Ilhan F; Yilmaz SF; Ergut S; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2022 Dec; 33(12):7632-7643. PubMed ID: 34138720

  • 38. Guaranteed satisficing and finite regret: Analysis of a cognitive satisficing value function.
    Tamatsukuri A; Takahashi T
    Biosystems; 2019 Jun; 180():46-53. PubMed ID: 30822443

  • 39. Simple artificial neural networks that match probability and exploit and explore when confronting a multiarmed bandit.
    Dawson MR; Dupuis B; Spetch ML; Kelly DM
    IEEE Trans Neural Netw; 2009 Aug; 20(8):1368-71. PubMed ID: 19596631

  • 40. Distributed Online Stochastic-Constrained Convex Optimization With Bandit Feedback.
    Wang C; Xu S; Yuan D
    IEEE Trans Cybern; 2024 Jan; 54(1):63-75. PubMed ID: 35724296
