114 related articles for article (PubMed ID: 35853052); the first 20 are listed below.

  • 1. Adjacency Constraint for Efficient Hierarchical Reinforcement Learning.
    Zhang T; Guo S; Tan T; Hu X; Chen F
    IEEE Trans Pattern Anal Mach Intell; 2023 Apr; 45(4):4152-4166. PubMed ID: 35853052

  • 2. End-to-End Hierarchical Reinforcement Learning With Integrated Subgoal Discovery.
    Pateria S; Subagdja B; Tan AH; Quek C
    IEEE Trans Neural Netw Learn Syst; 2022 Dec; 33(12):7778-7790. PubMed ID: 34156954

  • 3. Vision-Based Robot Navigation through Combining Unsupervised Learning and Hierarchical Reinforcement Learning.
    Zhou X; Bai T; Gao Y; Han Y
    Sensors (Basel); 2019 Apr; 19(7):. PubMed ID: 30939807

  • 4. Hierarchical approximate policy iteration with binary-tree state space decomposition.
    Xu X; Liu C; Yang SX; Hu D
    IEEE Trans Neural Netw; 2011 Dec; 22(12):1863-77. PubMed ID: 21990333

  • 5. Improvement of Reinforcement Learning With Supermodularity.
    Meng Y; Shi F; Tang L; Sun D
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5298-5309. PubMed ID: 37027690

  • 6. Cooperative modular reinforcement learning for large discrete action space problem.
    Ming F; Gao F; Liu K; Zhao C
    Neural Netw; 2023 Apr; 161():281-296. PubMed ID: 36774866

  • 7. MOO-MDP: An Object-Oriented Representation for Cooperative Multiagent Reinforcement Learning.
    Da Silva FL; Glatt R; Costa AHR
    IEEE Trans Cybern; 2019 Feb; 49(2):567-579. PubMed ID: 29990289

  • 8. Human locomotion with reinforcement learning using bioinspired reward reshaping strategies.
    Nowakowski K; Carvalho P; Six JB; Maillet Y; Nguyen AT; Seghiri I; M'Pemba L; Marcille T; Ngo ST; Dao TT
    Med Biol Eng Comput; 2021 Jan; 59(1):243-256. PubMed ID: 33417125

  • 9. Deep reinforcement learning in continuous action space for autonomous robotic surgery.
    Shahkoo AA; Abin AA
    Int J Comput Assist Radiol Surg; 2023 Mar; 18(3):423-431. PubMed ID: 36383302

  • 10. Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model.
    Wang B; Yan Y; Fan J
    Adv Neural Inf Process Syst; 2021 Dec; 34():16671-16685. PubMed ID: 36168331

  • 11. State-Temporal Compression in Reinforcement Learning With the Reward-Restricted Geodesic Metric.
    Guo S; Yan Q; Su X; Hu X; Chen F
    IEEE Trans Pattern Anal Mach Intell; 2022 Sep; 44(9):5572-5589. PubMed ID: 33764874

  • 12. The Path Planning of Mobile Robot by Neural Networks and Hierarchical Reinforcement Learning.
    Yu J; Su Y; Liao Y
    Front Neurorobot; 2020; 14():63. PubMed ID: 33132890

  • 13. Hierarchical Tactile-Based Control Decomposition of Dexterous In-Hand Manipulation Tasks.
    Veiga F; Akrour R; Peters J
    Front Robot AI; 2020; 7():521448. PubMed ID: 33501302

  • 14. A neural signature of hierarchical reinforcement learning.
    Ribas-Fernandes JJ; Solway A; Diuk C; McGuire JT; Barto AG; Niv Y; Botvinick MM
    Neuron; 2011 Jul; 71(2):370-9. PubMed ID: 21791294

  • 15. Path Following Control for Underactuated Airships with Magnitude and Rate Saturation.
    Gou H; Guo X; Lou W; Ou J; Yuan J
    Sensors (Basel); 2020 Dec; 20(24):. PubMed ID: 33333882

  • 16. Toward an Adaptive Threshold on Cooperative Bandwidth Management Based on Hierarchical Reinforcement Learning.
    Mobasheri M; Kim Y; Kim W
    Sensors (Basel); 2021 Oct; 21(21):. PubMed ID: 34770360

  • 17. Forward and inverse reinforcement learning sharing network weights and hyperparameters.
    Uchibe E; Doya K
    Neural Netw; 2021 Dec; 144():138-153. PubMed ID: 34492548

  • 18. An immediate-return reinforcement learning for the atypical Markov decision processes.
    Pan Z; Wen G; Tan Z; Yin S; Hu X
    Front Neurorobot; 2022; 16():1012427. PubMed ID: 36582302

  • 19. Dual Dynamic Scheduling for Hierarchical QoS in Uplink-NOMA: A Reinforcement Learning Approach.
    Li X; Cui Q; Zhai J; Huang X
    Sensors (Basel); 2021 Jun; 21(13):. PubMed ID: 34199075

  • 20. Variational Information Bottleneck Regularized Deep Reinforcement Learning for Efficient Robotic Skill Adaptation.
    Xiang G; Dian S; Du S; Lv Z
    Sensors (Basel); 2023 Jan; 23(2):. PubMed ID: 36679561
