BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

109 related articles for article (PubMed ID: 35431851)

  • 1. A Distributed Multi-Agent Formation Control Method Based on Deep Q Learning.
    Xie N; Hu Y; Chen L
    Front Neurorobot; 2022; 16():817168. PubMed ID: 35431851

  • 2. Enhancing Stability and Performance in Mobile Robot Path Planning with PMR-Dueling DQN Algorithm.
    Deguale DA; Yu L; Sinishaw ML; Li K
    Sensors (Basel); 2024 Feb; 24(5):. PubMed ID: 38475059

  • 3. Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller.
    Li J; Chavez-Galaviz J; Azizzadenesheli K; Mahmoudian N
    Sensors (Basel); 2023 Mar; 23(7):. PubMed ID: 37050633

  • 4. Minibatch Recursive Least Squares Q-Learning.
    Zhang C; Song Q; Meng Z
    Comput Intell Neurosci; 2021; 2021():5370281. PubMed ID: 34659393

  • 5. Constrained Deep Q-Learning Gradually Approaching Ordinary Q-Learning.
    Ohnishi S; Uchibe E; Yamaguchi Y; Nakanishi K; Yasui Y; Ishii S
    Front Neurorobot; 2019; 13():103. PubMed ID: 31920613

  • 6. Multi-robot task allocation in e-commerce RMFS based on deep reinforcement learning.
    Yuan R; Dou J; Li J; Wang W; Jiang Y
    Math Biosci Eng; 2023 Jan; 20(2):1903-1918. PubMed ID: 36899514

  • 7. Formation Control With Collision Avoidance Through Deep Reinforcement Learning Using Model-Guided Demonstration.
    Sui Z; Pu Z; Yi J; Wu S
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2358-2372. PubMed ID: 32673195

  • 8. Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning.
    Chen G; Yao S; Ma J; Pan L; Chen Y; Xu P; Ji J; Chen X
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32867080

  • 9. Multisource Transfer Double DQN Based on Actor Learning.
    Pan J; Wang X; Cheng Y; Yu Q
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2227-2238. PubMed ID: 29771674

  • 10. Deep reinforcement learning for automated radiation adaptation in lung cancer.
    Tseng HH; Luo Y; Cui S; Chien JT; Ten Haken RK; El Naqa I
    Med Phys; 2017 Dec; 44(12):6690-6705. PubMed ID: 29034482

  • 11. A Novel Reinforcement Learning Collision Avoidance Algorithm for USVs Based on Maneuvering Characteristics and COLREGs.
    Fan Y; Sun Z; Wang G
    Sensors (Basel); 2022 Mar; 22(6):. PubMed ID: 35336270

  • 12. DQNViz: A Visual Analytics Approach to Understand Deep Q-Networks.
    Wang J; Gou L; Shen HW; Yang H
    IEEE Trans Vis Comput Graph; 2018 Sep; ():. PubMed ID: 30188823

  • 13. A cooperative collision-avoidance control methodology for virtual coupling trains.
    Su S; Liu W; Zhu Q; Li R; Tang T; Lv J
    Accid Anal Prev; 2022 Aug; 173():106703. PubMed ID: 35584558

  • 14. Self-Paced Prioritized Curriculum Learning With Coverage Penalty in Deep Reinforcement Learning.
    Ren Z; Dong D; Li H; Chen C
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2216-2226. PubMed ID: 29771673

  • 15. Non-Communication Decentralized Multi-Robot Collision Avoidance in Grid Map Workspace with Double Deep Q-Network.
    Chen L; Zhao Y; Zhao H; Zheng B
    Sensors (Basel); 2021 Jan; 21(3):. PubMed ID: 33513856

  • 16. Multi-Agent Reinforcement Learning Based Fully Decentralized Dynamic Time Division Configuration for 5G and B5G Network.
    Chen X; Chuai G; Gao W
    Sensors (Basel); 2022 Feb; 22(5):. PubMed ID: 35270890

  • 17. Distributed Formation Navigation of Constrained Second-Order Multiagent Systems With Collision Avoidance and Connectivity Maintenance.
    Fu J; Wen G; Yu X; Wu ZG
    IEEE Trans Cybern; 2022 Apr; 52(4):2149-2162. PubMed ID: 32628607

  • 18. Robust ASV Navigation Through Ground to Water Cross-Domain Deep Reinforcement Learning.
    Lambert R; Li J; Wu LF; Mahmoudian N
    Front Robot AI; 2021; 8():739023. PubMed ID: 34616776

  • 19. Feedback stabilization of probabilistic finite state machines based on deep Q-network.
    Tian H; Su X; Hou Y
    Front Comput Neurosci; 2024; 18():1385047. PubMed ID: 38756915

  • 20. Distributed multi-agent collision avoidance using robust differential game.
    Xue W; Zhan S; Wu Z; Chen Y; Huang J
    ISA Trans; 2023 Mar; 134():95-107. PubMed ID: 36182609
