

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

132 related articles for the source article (PubMed ID: 32673195)

  • 1. Formation Control With Collision Avoidance Through Deep Reinforcement Learning Using Model-Guided Demonstration.
    Sui Z; Pu Z; Yi J; Wu S
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2358-2372. PubMed ID: 32673195

  • 2. Design and Experimental Validation of Deep Reinforcement Learning-Based Fast Trajectory Planning and Control for Mobile Robot in Unknown Environment.
    Chai R; Niu H; Carrasco J; Arvin F; Yin H; Lennox B
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5778-5792. PubMed ID: 36215389

  • 3. Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning.
    Chen G; Yao S; Ma J; Pan L; Chen Y; Xu P; Ji J; Chen X
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32867080

  • 4. A Novel Reinforcement Learning Collision Avoidance Algorithm for USVs Based on Maneuvering Characteristics and COLREGs.
    Fan Y; Sun Z; Wang G
    Sensors (Basel); 2022 Mar; 22(6):. PubMed ID: 35336270

  • 5. Comparing Deep Reinforcement Learning Algorithms' Ability to Safely Navigate Challenging Waters.
    Larsen TN; Teigen HØ; Laache T; Varagnolo D; Rasheed A
    Front Robot AI; 2021; 8():738113. PubMed ID: 34589522

  • 6. Reinforcement Learning Tracking Control for Robotic Manipulator With Kernel-Based Dynamic Model.
    Hu Y; Wang W; Liu H; Liu L
    IEEE Trans Neural Netw Learn Syst; 2020 Sep; 31(9):3570-3578. PubMed ID: 31689218

  • 7. A Distributed Multi-Agent Formation Control Method Based on Deep Q Learning.
    Xie N; Hu Y; Chen L
    Front Neurorobot; 2022; 16():817168. PubMed ID: 35431851

  • 8. End-to-End Autonomous Navigation Based on Deep Reinforcement Learning with a Survival Penalty Function.
    Jeng SL; Chiang C
    Sensors (Basel); 2023 Oct; 23(20):. PubMed ID: 37896743

  • 9. Leader-follower UAVs formation control based on a deep Q-network collaborative framework.
    Liu Z; Li J; Shen J; Wang X; Chen P
    Sci Rep; 2024 Feb; 14(1):4674. PubMed ID: 38409308

  • 10. Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller.
    Li J; Chavez-Galaviz J; Azizzadenesheli K; Mahmoudian N
    Sensors (Basel); 2023 Mar; 23(7):. PubMed ID: 37050633

  • 11. Distributed Formation Navigation of Constrained Second-Order Multiagent Systems With Collision Avoidance and Connectivity Maintenance.
    Fu J; Wen G; Yu X; Wu ZG
    IEEE Trans Cybern; 2022 Apr; 52(4):2149-2162. PubMed ID: 32628607

  • 12. Human skill knowledge guided global trajectory policy reinforcement learning method.
    Zang Y; Wang P; Zha F; Guo W; Li C; Sun L
    Front Neurorobot; 2024; 18():1368243. PubMed ID: 38559491

  • 13. Speed Control for Leader-Follower Robot Formation Using Fuzzy System and Supervised Machine Learning.
    Samadi Gharajeh M; Jond HB
    Sensors (Basel); 2021 May; 21(10):. PubMed ID: 34069186

  • 14. Human locomotion with reinforcement learning using bioinspired reward reshaping strategies.
    Nowakowski K; Carvalho P; Six JB; Maillet Y; Nguyen AT; Seghiri I; M'Pemba L; Marcille T; Ngo ST; Dao TT
    Med Biol Eng Comput; 2021 Jan; 59(1):243-256. PubMed ID: 33417125

  • 15. Dual-Arm Robot Trajectory Planning Based on Deep Reinforcement Learning under Complex Environment.
    Tang W; Cheng C; Ai H; Chen L
    Micromachines (Basel); 2022 Mar; 13(4):. PubMed ID: 35457870

  • 16. Prescribed-time containment control of multi-agent systems subject to collision avoidance and connectivity maintenance.
    Tang C; Ji L; Yang S; Guo X; Li H
    ISA Trans; 2024 May; 148():156-168. PubMed ID: 38458906

  • 17. A cooperative collision-avoidance control methodology for virtual coupling trains.
    Su S; Liu W; Zhu Q; Li R; Tang T; Lv J
    Accid Anal Prev; 2022 Aug; 173():106703. PubMed ID: 35584558

  • 18. Simultaneous Obstacle Avoidance and Target Tracking of Multiple Wheeled Mobile Robots With Certified Safety.
    Li X; Xu Z; Li S; Su Z; Zhou X
    IEEE Trans Cybern; 2022 Nov; 52(11):11859-11873. PubMed ID: 33961580

  • 19. Leader-Follower Bipartite Output Synchronization on Signed Digraphs Under Adversarial Factors via Data-Based Reinforcement Learning.
    Li Q; Xia L; Song R; Liu J
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4185-4195. PubMed ID: 31831451

  • 20. Kernel-based least squares policy iteration for reinforcement learning.
    Xu X; Hu D; Lu X
    IEEE Trans Neural Netw; 2007 Jul; 18(4):973-92. PubMed ID: 17668655
