BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

123 related articles for the article with PubMed ID 35702420

  • 1. Model Selection for Offline Reinforcement Learning: Practical Considerations for Healthcare Settings.
    Tang S; Wiens J
    Proc Mach Learn Res; 2021 Aug; 149():2-35. PubMed ID: 35702420

  • 2. Reinforcement learning for intensive care medicine: actionable clinical insights from novel approaches to reward shaping and off-policy model evaluation.
    Roggeveen LF; Hassouni AE; de Grooth HJ; Girbes ARJ; Hoogendoorn M; Elbers PWG
    Intensive Care Med Exp; 2024 Mar; 12(1):32. PubMed ID: 38526681

  • 3. Efficient Offline Reinforcement Learning With Relaxed Conservatism.
    Huang L; Dong B; Zhang W
    IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5260-5272. PubMed ID: 38345962

  • 4. Continuous Action Reinforcement Learning From a Mixture of Interpretable Experts.
    Akrour R; Tateo D; Peters J
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6795-6806. PubMed ID: 34375280

  • 5. Offline Reinforcement Learning With Behavior Value Regularization.
    Huang L; Dong B; Xie W; Zhang W
    IEEE Trans Cybern; 2024 Jun; 54(6):3692-3704. PubMed ID: 38669164

  • 6. Is Deep Reinforcement Learning Ready for Practical Applications in Healthcare? A Sensitivity Analysis of Duel-DDQN for Hemodynamic Management in Sepsis Patients.
    Lu M; Shahn Z; Sow D; Doshi-Velez F; Lehman LH
    AMIA Annu Symp Proc; 2020; 2020():773-782. PubMed ID: 33936452

  • 7. Transatlantic transferability of a new reinforcement learning model for optimizing haemodynamic treatment for critically ill patients with sepsis.
    Roggeveen L; El Hassouni A; Ahrendt J; Guo T; Fleuren L; Thoral P; Girbes AR; Hoogendoorn M; Elbers PW
    Artif Intell Med; 2021 Feb; 112():102003. PubMed ID: 33581824

  • 8. A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems.
    Prudencio RF; Maximo MROA; Colombini EL
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; PP():. PubMed ID: 37030754

  • 9. Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning Approach to Critical Care.
    Shirali A; Schubert A; Alaa A
    IEEE J Biomed Health Inform; 2024 Jun; PP():. PubMed ID: 38885106

  • 10. Offline Learning of Closed-Loop Deep Brain Stimulation Controllers for Parkinson Disease Treatment.
    Gao Q; Schmidt SL; Chowdhury A; Feng G; Peters JJ; Genty K; Grill WM; Turner DA; Pajic M
    ArXiv; 2023 Mar; ():. PubMed ID: 36798453

  • 11. Continuous action deep reinforcement learning for propofol dosing during general anesthesia.
    Schamberg G; Badgeley M; Meschede-Krasa B; Kwon O; Brown EN
    Artif Intell Med; 2022 Jan; 123():102227. PubMed ID: 34998516

  • 12. Offline Model-Based Adaptable Policy Learning for Decision-Making in Out-of-Support Regions.
    Chen XH; Luo FM; Yu Y; Li Q; Qin Z; Shang W; Ye J
    IEEE Trans Pattern Anal Mach Intell; 2023 Dec; 45(12):15260-15274. PubMed ID: 37725727

  • 13. Representation learning for continuous action spaces is beneficial for efficient policy learning.
    Zhao T; Wang Y; Sun W; Chen Y; Niu G; Sugiyama M
    Neural Netw; 2023 Feb; 159():137-152. PubMed ID: 36566604

  • 14. Trajectory Inspection: A Method for Iterative Clinician-Driven Design of Reinforcement Learning Studies.
    Ji CX; Oberst M; Kanjilal S; Sontag D
    AMIA Jt Summits Transl Sci Proc; 2021; 2021():305-314. PubMed ID: 34457145

  • 15. Improvement of Reinforcement Learning With Supermodularity.
    Meng Y; Shi F; Tang L; Sun D
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5298-5309. PubMed ID: 37027690

  • 16. End-to-end offline reinforcement learning for glycemia control.
    Beolet T; Adenis A; Huneker E; Louis M
    Artif Intell Med; 2024 Jun; 154():102920. PubMed ID: 38972092

  • 17. Kernel-based least squares policy iteration for reinforcement learning.
    Xu X; Hu D; Lu X
    IEEE Trans Neural Netw; 2007 Jul; 18(4):973-92. PubMed ID: 17668655

  • 18. Mild Policy Evaluation for Offline Actor-Critic.
    Huang L; Dong B; Lu J; Zhang W
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; PP():. PubMed ID: 37676802

  • 19. Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation.
    Zhang T; Wang X; Liang B; Yuan B
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; 34(12):9925-9939. PubMed ID: 35439142

  • 20. Continuous-Time Decision Transformer for Healthcare Applications.
    Zhang Z; Mei H; Xu Y
    Proc Mach Learn Res; 2023 Apr; 206():6245-6262. PubMed ID: 38435084
