These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

168 related articles for article (PubMed ID: 35746272)

  • 1. Learning Dynamics and Control of a Stochastic System under Limited Sensing Capabilities.
    Zadenoori MA; Vicario E
    Sensors (Basel); 2022 Jun; 22(12):. PubMed ID: 35746272

  • 2. Goal-oriented inference of environment from redundant observations.
    Takahashi K; Fukai T; Sakai Y; Takekawa T
    Neural Netw; 2024 Jun; 174():106246. PubMed ID: 38547801

  • 3. Active Inference and Reinforcement Learning: A Unified Inference on Continuous State and Action Spaces Under Partial Observability.
    Malekzadeh P; Plataniotis KN
    Neural Comput; 2024 Sep; 36(10):2073-2135. PubMed ID: 39177966

  • 4. Optimal management of stochastic invasion in a metapopulation with Allee effects.
    Mallela A; Hastings A
    J Theor Biol; 2022 Sep; 549():111221. PubMed ID: 35843441

  • 5. Hidden Markov model approach to skill learning and its application to telerobotics.
    Yang J; Xu Y; Chen CS
    IEEE Trans Rob Autom; 1994 Oct; 10(5):621-31. PubMed ID: 11539290

  • 6. Addressing structural and observational uncertainty in resource management.
    Fackler P; Pacifici K
    J Environ Manage; 2014 Jan; 133():27-36. PubMed ID: 24355689

  • 7. Reinforcement Learning with Limited Reinforcement: Using Bayes Risk for Active Learning in POMDPs.
    Doshi F; Pineau J; Roy N
    Proc Int Conf Mach Learn; 2008; 301():256-263. PubMed ID: 20467572

  • 8. Decision making under uncertainty: a neural model based on partially observable Markov decision processes.
    Rao RP
    Front Comput Neurosci; 2010; 4():146. PubMed ID: 21152255

  • 9. Probabilistic co-adaptive brain-computer interfacing.
    Bryan MJ; Martin SA; Cheung W; Rao RP
    J Neural Eng; 2013 Dec; 10(6):066008. PubMed ID: 24140680

  • 10. Faster Teaching via POMDP Planning.
    Rafferty AN; Brunskill E; Griffiths TL; Shafto P
    Cogn Sci; 2016 Aug; 40(6):1290-332. PubMed ID: 26400190

  • 11. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.
    Lewis FL; Vamvoudakis KG
    IEEE Trans Syst Man Cybern B Cybern; 2011 Feb; 41(1):14-25. PubMed ID: 20350860

  • 12. Active inference and agency: optimal control without cost functions.
    Friston K; Samothrakis S; Montague R
    Biol Cybern; 2012 Oct; 106(8-9):523-41. PubMed ID: 22864468

  • 13. Employing decomposable partially observable Markov decision processes to control gene regulatory networks.
    Erdogdu U; Polat F; Alhajj R
    Artif Intell Med; 2017 Nov; 83():14-34. PubMed ID: 28733120

  • 14. Reward optimization in the primate brain: a probabilistic model of decision making under uncertainty.
    Huang Y; Rao RP
    PLoS One; 2013; 8(1):e53344. PubMed ID: 23349707

  • 15. A control-theoretic system identification framework and a real-time closed-loop clinical simulation testbed for electrical brain stimulation.
    Yang Y; Connolly AT; Shanechi MM
    J Neural Eng; 2018 Dec; 15(6):066007. PubMed ID: 30221624

  • 16. Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation.
    Ornik M; Topcu U
    J Mach Learn Res; 2021; 22():1-40. PubMed ID: 35002545

  • 17. From data to optimal decision making: a data-driven, probabilistic machine learning approach to decision support for patients with sepsis.
    Tsoukalas A; Albertson T; Tagkopoulos I
    JMIR Med Inform; 2015 Feb; 3(1):e11. PubMed ID: 25710907

  • 18. Value-directed human behavior analysis from video using partially observable Markov decision processes.
    Hoey J; Little JJ
    IEEE Trans Pattern Anal Mach Intell; 2007 Jul; 29(7):1118-32. PubMed ID: 17496372

  • 19. Bayesian restoration of a hidden Markov chain with applications to DNA sequencing.
    Churchill GA; Lazareva B
    J Comput Biol; 1999; 6(2):261-77. PubMed ID: 10421527

  • 20. Uncertainty maximization in partially observable domains: A cognitive perspective.
    Ramicic M; Bonarini A
    Neural Netw; 2023 May; 162():456-471. PubMed ID: 36965275
