These tools are no longer maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

181 related articles for article (PubMed ID: 34375280)

  • 1. Continuous Action Reinforcement Learning From a Mixture of Interpretable Experts.
    Akrour R; Tateo D; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6795-6806. PubMed ID: 34375280

  • 2. Continuous action deep reinforcement learning for propofol dosing during general anesthesia.
    Schamberg G; Badgeley M; Meschede-Krasa B; Kwon O; Brown EN
    Artif Intell Med; 2022 Jan; 123():102227. PubMed ID: 34998516

  • 3. Achieving efficient interpretability of reinforcement learning via policy distillation and selective input gradient regularization.
    Xing J; Nagata T; Zou X; Neftci E; Krichmar JL
    Neural Netw; 2023 Apr; 161():228-241. PubMed ID: 36774862

  • 4. Self-Supervised Discovering of Interpretable Features for Reinforcement Learning.
    Shi W; Huang G; Song S; Wang Z; Lin T; Wu C
    IEEE Trans Pattern Anal Mach Intell; 2022 May; 44(5):2712-2724. PubMed ID: 33186101

  • 5. Kernel-based least squares policy iteration for reinforcement learning.
    Xu X; Hu D; Lu X
    IEEE Trans Neural Netw; 2007 Jul; 18(4):973-92. PubMed ID: 17668655

  • 6. Reinforcement learning in continuous time and space: interference and not ill conditioning is the main problem when using distributed function approximators.
    Baddeley B
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):950-6. PubMed ID: 18632383

  • 7. Inference-Based Posteriori Parameter Distribution Optimization.
    Wang X; Li T; Cheng Y; Chen CLP
    IEEE Trans Cybern; 2022 May; 52(5):3006-3017. PubMed ID: 33027029

  • 8. Efficient Reinforcement Learning from Demonstration via Bayesian Network-Based Knowledge Extraction.
    Zhang Y; Lan Y; Fang Q; Xu X; Li J; Zeng Y
    Comput Intell Neurosci; 2021; 2021():7588221. PubMed ID: 34603434

  • 9. Deep reinforcement learning and its applications in medical imaging and radiation therapy: a survey.
    Xu L; Zhu S; Wen N
    Phys Med Biol; 2022 Nov; 67(22):. PubMed ID: 36270582

  • 10. Context meta-reinforcement learning via neuromodulation.
    Ben-Iwhiwhu E; Dick J; Ketz NA; Pilly PK; Soltoggio A
    Neural Netw; 2022 Aug; 152():70-79. PubMed ID: 35512540

  • 11. MoËT: Mixture of Expert Trees and its application to verifiable reinforcement learning.
    Vasić M; Petrović A; Wang K; Nikolić M; Singh R; Khurshid S
    Neural Netw; 2022 Jul; 151():34-47. PubMed ID: 35381441

  • 12. Selective particle attention: Rapidly and flexibly selecting features for deep reinforcement learning.
    Blakeman S; Mareschal D
    Neural Netw; 2022 Jun; 150():408-421. PubMed ID: 35358888

  • 13. Distributional generative adversarial imitation learning with reproducing kernel generalization.
    Zhou Y; Lu M; Liu X; Che Z; Xu Z; Tang J; Zhang Y; Peng Y; Peng Y
    Neural Netw; 2023 Aug; 165():43-59. PubMed ID: 37276810

  • 14. Human-level control through deep reinforcement learning.
    Mnih V; Kavukcuoglu K; Silver D; Rusu AA; Veness J; Bellemare MG; Graves A; Riedmiller M; Fidjeland AK; Ostrovski G; Petersen S; Beattie C; Sadik A; Antonoglou I; King H; Kumaran D; Wierstra D; Legg S; Hassabis D
    Nature; 2015 Feb; 518(7540):529-33. PubMed ID: 25719670

  • 15. Adaptive Quadruped Balance Control for Dynamic Environments Using Maximum-Entropy Reinforcement Learning.
    Sun H; Fu T; Ling Y; He C
    Sensors (Basel); 2021 Sep; 21(17):. PubMed ID: 34502796

  • 16. Conformer-RL: A deep reinforcement learning library for conformer generation.
    Jiang R; Gogineni T; Kammeraad J; He Y; Tewari A; Zimmerman PM
    J Comput Chem; 2022 Oct; 43(27):1880-1886. PubMed ID: 36000759

  • 17. Hierarchical approximate policy iteration with binary-tree state space decomposition.
    Xu X; Liu C; Yang SX; Hu D
    IEEE Trans Neural Netw; 2011 Dec; 22(12):1863-77. PubMed ID: 21990333

  • 18. Model Selection for Offline Reinforcement Learning: Practical Considerations for Healthcare Settings.
    Tang S; Wiens J
    Proc Mach Learn Res; 2021 Aug; 149():2-35. PubMed ID: 35702420

  • 19. Multimodal information bottleneck for deep reinforcement learning with multiple sensors.
    You B; Liu H
    Neural Netw; 2024 Aug; 176():106347. PubMed ID: 38688069

  • 20. Transatlantic transferability of a new reinforcement learning model for optimizing haemodynamic treatment for critically ill patients with sepsis.
    Roggeveen L; El Hassouni A; Ahrendt J; Guo T; Fleuren L; Thoral P; Girbes AR; Hoogendoorn M; Elbers PW
    Artif Intell Med; 2021 Feb; 112():102003. PubMed ID: 33581824
