116 related articles for article (PubMed ID: 20467572)
21. Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics. Kwon M; Daptardar S; Schrater P; Pitkow X. Adv Neural Inf Process Syst. 2020 Dec;33:7898-7909. PubMed ID: 34712038
22. Modeling treatment of ischemic heart disease with partially observable Markov decision processes. Hauskrecht M; Fraser H. Proc AMIA Symp. 1998:538-42. PubMed ID: 9929277
23. An algorithm to create model file for Partially Observable Markov Decision Process for mobile robot path planning. Deshpande SV; Harikrishnan R; Sampe J; Patwa A. MethodsX. 2024 Jun;12:102552. PubMed ID: 38299041
24. Deep reinforcement learning for the olfactory search POMDP: a quantitative benchmark. Loisy A; Heinonen RA. Eur Phys J E Soft Matter. 2023 Mar;46(3):17. PubMed ID: 36939979
25. Detecting Changes and Avoiding Catastrophic Forgetting in Dynamic Partially Observable Environments. Dick J; Ladosz P; Ben-Iwhiwhu E; Shimadzu H; Kinnell P; Pilly PK; Kolouri S; Soltoggio A. Front Neurorobot. 2020;14:578675. PubMed ID: 33424575
26. Sorting Objects from a Conveyor Belt Using POMDPs with Multiple-Object Observations and Information-Gain Rewards. Mezei AD; Tamás L; Buşoniu L. Sensors (Basel). 2020 Apr;20(9). PubMed ID: 32349393
28. UAV Autonomous Tracking and Landing Based on Deep Reinforcement Learning Strategy. Xie J; Peng X; Wang H; Niu W; Zheng X. Sensors (Basel). 2020 Oct;20(19). PubMed ID: 33019747
29. Bayesian reinforcement learning for navigation planning in unknown environments. Alali M; Imani M. Front Artif Intell. 2024;7:1308031. PubMed ID: 39026967
30. Forward and Backward Bellman Equations Improve the Efficiency of the EM Algorithm for DEC-POMDP. Tottori T; Kobayashi TJ. Entropy (Basel). 2021 Apr;23(5). PubMed ID: 33947054
31. Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments. Nguyen HS; Cruz F; Dazeley R. Sensors (Basel). 2023 Mar;23(5). PubMed ID: 36904885
32. Which states matter? An application of an intelligent discretization method to solve a continuous POMDP in conservation biology. Nicol S; Chadès I. PLoS One. 2012;7(2):e28993. PubMed ID: 22363398
33. Quantifying Reinforcement-Learning Agent's Autonomy, Reliance on Memory and Internalisation of the Environment. Ingel A; Makkeh A; Corcoll O; Vicente R. Entropy (Basel). 2022 Mar;24(3). PubMed ID: 35327912
34. False Data-Injection Attack Detection in Cyber-Physical Systems With Unknown Parameters: A Deep Reinforcement Learning Approach. Liu K; Zhang H; Zhang Y; Sun C. IEEE Trans Cybern. 2023 Nov;53(11):7115-7125. PubMed ID: 37015355