

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

116 related articles for article (PubMed ID: 20467572)

  • 21. Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics.
    Kwon M; Daptardar S; Schrater P; Pitkow X
    Adv Neural Inf Process Syst; 2020 Dec; 33():7898-7909. PubMed ID: 34712038

  • 22. Modeling treatment of ischemic heart disease with partially observable Markov decision processes.
    Hauskrecht M; Fraser H
    Proc AMIA Symp; 1998; ():538-42. PubMed ID: 9929277

  • 23. An algorithm to create model file for Partially Observable Markov Decision Process for mobile robot path planning.
    Deshpande SV; Harikrishnan R; Sampe J; Patwa A
    MethodsX; 2024 Jun; 12():102552. PubMed ID: 38299041

  • 24. Deep reinforcement learning for the olfactory search POMDP: a quantitative benchmark.
    Loisy A; Heinonen RA
    Eur Phys J E Soft Matter; 2023 Mar; 46(3):17. PubMed ID: 36939979

  • 25. Detecting Changes and Avoiding Catastrophic Forgetting in Dynamic Partially Observable Environments.
    Dick J; Ladosz P; Ben-Iwhiwhu E; Shimadzu H; Kinnell P; Pilly PK; Kolouri S; Soltoggio A
    Front Neurorobot; 2020; 14():578675. PubMed ID: 33424575

  • 26. Sorting Objects from a Conveyor Belt Using POMDPs with Multiple-Object Observations and Information-Gain Rewards.
    Mezei AD; Tamás L; Buşoniu L
    Sensors (Basel); 2020 Apr; 20(9):. PubMed ID: 32349393

  • 27. Faster Teaching via POMDP Planning.
    Rafferty AN; Brunskill E; Griffiths TL; Shafto P
    Cogn Sci; 2016 Aug; 40(6):1290-332. PubMed ID: 26400190

  • 28. UAV Autonomous Tracking and Landing Based on Deep Reinforcement Learning Strategy.
    Xie J; Peng X; Wang H; Niu W; Zheng X
    Sensors (Basel); 2020 Oct; 20(19):. PubMed ID: 33019747

  • 29. Bayesian reinforcement learning for navigation planning in unknown environments.
    Alali M; Imani M
    Front Artif Intell; 2024; 7():1308031. PubMed ID: 39026967

  • 30. Forward and Backward Bellman Equations Improve the Efficiency of the EM Algorithm for DEC-POMDP.
    Tottori T; Kobayashi TJ
    Entropy (Basel); 2021 Apr; 23(5):. PubMed ID: 33947054

  • 31. Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments.
    Nguyen HS; Cruz F; Dazeley R
    Sensors (Basel); 2023 Mar; 23(5):. PubMed ID: 36904885

  • 32. Which states matter? An application of an intelligent discretization method to solve a continuous POMDP in conservation biology.
    Nicol S; Chadès I
    PLoS One; 2012; 7(2):e28993. PubMed ID: 22363398

  • 33. Quantifying Reinforcement-Learning Agent's Autonomy, Reliance on Memory and Internalisation of the Environment.
    Ingel A; Makkeh A; Corcoll O; Vicente R
    Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327912

  • 34. False Data-Injection Attack Detection in Cyber-Physical Systems With Unknown Parameters: A Deep Reinforcement Learning Approach.
    Liu K; Zhang H; Zhang Y; Sun C
    IEEE Trans Cybern; 2023 Nov; 53(11):7115-7125. PubMed ID: 37015355

  • 35. Employing decomposable partially observable Markov decision processes to control gene regulatory networks.
    Erdogdu U; Polat F; Alhajj R
    Artif Intell Med; 2017 Nov; 83():14-34. PubMed ID: 28733120

  • 36. A Framework for Multi-Agent UAV Exploration and Target-Finding in GPS-Denied and Partially Observable Environments.
    Walker O; Vanegas F; Gonzalez F
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32839390

  • 37. Intrinsic Rewards for Maintenance, Approach, Avoidance, and Achievement Goal Types.
    Dhakan P; Merrick K; Rañó I; Siddique N
    Front Neurorobot; 2018; 12():63. PubMed ID: 30356820

  • 38. What's a good prediction? Challenges in evaluating an agent's knowledge.
    Kearney A; Koop AJ; Pilarski PM
    Adapt Behav; 2023 Jun; 31(3):197-212. PubMed ID: 37284424

  • 39. Value-directed human behavior analysis from video using partially observable Markov decision processes.
    Hoey J; Little JJ
    IEEE Trans Pattern Anal Mach Intell; 2007 Jul; 29(7):1118-32. PubMed ID: 17496372

  • 40. Theoretical Analysis of Heuristic Search Methods for Online POMDPs.
    Ross S; Pineau J; Chaib-Draa B
    Adv Neural Inf Process Syst; 2008; 20():1216-1225. PubMed ID: 21625296
