
107 related articles for PubMed ID 38319761

  • 1. Shielded Planning Guided Data-Efficient and Safe Reinforcement Learning.
    Wang H; Qin J; Kan Z
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; PP():. PubMed ID: 38319761

  • 2. Data-Driven Safe Policy Optimization for Black-Box Dynamical Systems With Temporal Logic Specifications.
    Zhang C; Lin S; Wang H; Chen Z; Wang S; Kan Z
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38109255

  • 3. Exploration With Task Information for Meta Reinforcement Learning.
    Jiang P; Song S; Huang G
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4033-4046. PubMed ID: 34739382

  • 4. Vision-Based Efficient Robotic Manipulation with a Dual-Streaming Compact Convolutional Transformer.
    Guo H; Song M; Ding Z; Yi C; Jiang F
    Sensors (Basel); 2023 Jan; 23(1):. PubMed ID: 36617113

  • 5. Graph-Attention-Based Causal Discovery With Trust Region-Navigated Clipping Policy Optimization.
    Liu S; Feng Y; Wu K; Cheng G; Huang J; Liu Z
    IEEE Trans Cybern; 2023 Apr; 53(4):2311-2324. PubMed ID: 34665751

  • 6. Predictive hierarchical reinforcement learning for path-efficient mapless navigation with moving target.
    Li H; Luo B; Song W; Yang C
    Neural Netw; 2023 Aug; 165():677-688. PubMed ID: 37385022

  • 7. Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics.
    Berkenkamp F; Krause A; Schoellig AP
    Mach Learn; 2023; 112(10):3713-3747. PubMed ID: 37692295

  • 8. Visual Pretraining via Contrastive Predictive Model for Pixel-Based Reinforcement Learning.
    Luu TM; Vu T; Nguyen T; Yoo CD
    Sensors (Basel); 2022 Aug; 22(17):. PubMed ID: 36080961

  • 9. Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty.
    Kim M; Kim JS; Choi MS; Park JH
    Sensors (Basel); 2022 Sep; 22(19):. PubMed ID: 36236366

  • 10. Implicit Posteriori Parameter Distribution Optimization in Reinforcement Learning.
    Li T; Yang G; Chu J
    IEEE Trans Cybern; 2024 May; 54(5):3051-3064. PubMed ID: 37030741

  • 11. Deep Reinforcement Learning for Indoor Mobile Robot Path Planning.
    Gao J; Ye W; Guo J; Li Z
    Sensors (Basel); 2020 Sep; 20(19):. PubMed ID: 32992750

  • 12. Learning-Based End-to-End Path Planning for Lunar Rovers with Safety Constraints.
    Yu X; Wang P; Zhang Z
    Sensors (Basel); 2021 Jan; 21(3):. PubMed ID: 33504073

  • 13. Reinforcement Learning Control of Robotic Knee With Human-in-the-Loop by Flexible Policy Iteration.
    Gao X; Si J; Wen Y; Li M; Huang H
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5873-5887. PubMed ID: 33956634

  • 14. Kernel-based least squares policy iteration for reinforcement learning.
    Xu X; Hu D; Lu X
    IEEE Trans Neural Netw; 2007 Jul; 18(4):973-92. PubMed ID: 17668655

  • 15. Toward Energy-Efficient Routing of Multiple AGVs with Multi-Agent Reinforcement Learning.
    Ye X; Deng Z; Shi Y; Shen W
    Sensors (Basel); 2023 Jun; 23(12):. PubMed ID: 37420781

  • 16. Informative Trajectory Planning Using Reinforcement Learning for Minimum-Time Exploration of Spatiotemporal Fields.
    Li Z; You K; Sun J; Wang G
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37581975

  • 17. Sample Efficient Deep Reinforcement Learning With Online State Abstraction and Causal Transformer Model Prediction.
    Lan Y; Xu X; Fang Q; Hao J
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37581972

  • 18. A formal methods approach to interpretable reinforcement learning for robotic planning.
    Li X; Serlin Z; Yang G; Belta C
    Sci Robot; 2019 Dec; 4(37):. PubMed ID: 33137718

  • 19. Safe Reinforcement Learning via a Model-Free Safety Certifier.
    Modares A; Sadati N; Esmaeili B; Yaghmaie FA; Modares H
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3302-3311. PubMed ID: 37053065

  • 20. A Novel Learning-Based Trajectory Generation Strategy for a Quadrotor.
    Hua H; Fang Y
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):9068-9079. PubMed ID: 36346868

    Page 1 of 6.