These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

116 related articles for article (PubMed ID: 20467572)

  • 1. Reinforcement Learning with Limited Reinforcement: Using Bayes Risk for Active Learning in POMDPs.
    Doshi F; Pineau J; Roy N
    Proc Int Conf Mach Learn; 2008; 301():256-263. PubMed ID: 20467572

  • 2. Modeling and Planning with Macro-Actions in Decentralized POMDPs.
    Amato C; Konidaris G; Kaelbling LP; How JP
    J Artif Intell Res; 2019; 64():817-859. PubMed ID: 31656393

  • 3. Generating Reward Functions Using IRL Towards Individualized Cancer Screening.
    Petousis P; Han SX; Hsu W; Bui AAT
    Artif Intell Health (2018); 2019; 11326():213-227. PubMed ID: 31363717

  • 4. Learning State-Variable Relationships in POMCP: A Framework for Mobile Robots.
    Zuccotto M; Piccinelli M; Castellini A; Marchesini E; Farinelli A
    Front Robot AI; 2022; 9():819107. PubMed ID: 35928541

  • 5. Active Inference and Reinforcement Learning: A Unified Inference on Continuous State and Action Spaces Under Partial Observability.
    Malekzadeh P; Plataniotis KN
    Neural Comput; 2024 Sep; 36(10):2073-2135. PubMed ID: 39177966

  • 6. Online Planning Algorithms for POMDPs.
    Ross S; Pineau J; Paquet S; Chaib-Draa B
    J Artif Intell Res; 2008 Jul; 32(2):663-704. PubMed ID: 19777080

  • 7. Deep Reinforcement Learning With Modulated Hebbian Plus Q-Network Architecture.
    Ladosz P; Ben-Iwhiwhu E; Dick J; Ketz N; Kolouri S; Krichmar JL; Pilly PK; Soltoggio A
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2045-2056. PubMed ID: 34559664

  • 8. Addressing structural and observational uncertainty in resource management.
    Fackler P; Pacifici K
    J Environ Manage; 2014 Jan; 133():27-36. PubMed ID: 24355689

  • 9. Task-based decomposition of factored POMDPs.
    Shani G
IEEE Trans Cybern; 2014 Feb; 44(2):208-216. PubMed ID: 23757544

  • 10. Partial observability and management of ecological systems.
    Williams BK; Brown ED
    Ecol Evol; 2022 Sep; 12(9):e9197. PubMed ID: 36172296

  • 11. Goal-oriented inference of environment from redundant observations.
    Takahashi K; Fukai T; Sakai Y; Takekawa T
    Neural Netw; 2024 Jun; 174():106246. PubMed ID: 38547801

  • 12. Probabilistic co-adaptive brain-computer interfacing.
    Bryan MJ; Martin SA; Cheung W; Rao RP
    J Neural Eng; 2013 Dec; 10(6):066008. PubMed ID: 24140680

  • 13. Learning Dynamics and Control of a Stochastic System under Limited Sensing Capabilities.
    Zadenoori MA; Vicario E
    Sensors (Basel); 2022 Jun; 22(12):. PubMed ID: 35746272

• 14. Decision making under uncertainty: a neural model based on partially observable Markov decision processes.
    Rao RP
    Front Comput Neurosci; 2010; 4():146. PubMed ID: 21152255

  • 15. Improving POMDP tractability via belief compression and clustering.
    Li X; Cheung WK; Liu J
IEEE Trans Syst Man Cybern B Cybern; 2010 Feb; 40(1):125-136. PubMed ID: 19651557

  • 16. Deep reinforcement learning navigation via decision transformer in autonomous driving.
    Ge L; Zhou X; Li Y; Wang Y
    Front Neurorobot; 2024; 18():1338189. PubMed ID: 38566892

  • 17. Intelligent Knowledge Distribution: Constrained-Action POMDPs for Resource-Aware Multiagent Communication.
    Fowler MC; Clancy TC; Williams RK
    IEEE Trans Cybern; 2022 Apr; 52(4):2004-2017. PubMed ID: 32780707

  • 18. Planning treatment of ischemic heart disease with partially observable Markov decision processes.
    Hauskrecht M; Fraser H
Artif Intell Med; 2000 Mar; 18(3):221-244. PubMed ID: 10675716

  • 19. Model-based reinforcement learning for partially observable games with sampling-based state estimation.
    Fujita H; Ishii S
Neural Comput; 2007 Nov; 19(11):3051-3087. PubMed ID: 17883349

  • 20. Energy Efficient Execution of POMDP Policies.
Grześ M; Poupart P; Yang X; Hoey J
IEEE Trans Cybern; 2015 Nov; 45(11):2484-2497. PubMed ID: 25532202
