These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.
237 related articles for article (PubMed ID: 32554331)
21. Cooperative Differential Game-Based Distributed Optimal Synchronization Control of Heterogeneous Nonlinear Multiagent Systems. Sun J; Ming Z. IEEE Trans Cybern. 2023 Dec;53(12):7933-7942. PubMed ID: 37022861.

22. Inverse reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units. Yu C; Liu J; Zhao H. BMC Med Inform Decis Mak. 2019 Apr;19(Suppl 2):57. PubMed ID: 30961594.

23. Cooperative multiagent congestion control for high-speed networks. Hwang KS; Tan SW; Hsiao MC; Wu CS. IEEE Trans Syst Man Cybern B Cybern. 2005 Apr;35(2):255-68. PubMed ID: 15828654.

24. Optimal Tracking Control of Nonlinear Multiagent Systems Using Internal Reinforce Q-Learning. Peng Z; Luo R; Hu J; Shi K; Nguang SK; Ghosh BK. IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):4043-4055. PubMed ID: 33587710.

25. MOO-MDP: An Object-Oriented Representation for Cooperative Multiagent Reinforcement Learning. Da Silva FL; Glatt R; Costa AHR. IEEE Trans Cybern. 2019 Feb;49(2):567-579. PubMed ID: 29990289.

26. Multiexperience-Assisted Efficient Multiagent Reinforcement Learning. Zhang T; Liu Z; Yi J; Wu S; Pu Z; Zhao Y. IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12678-12692. PubMed ID: 37037246.

27. Self-organizing neural architectures and cooperative learning in a multiagent environment. Xiao D; Tan AH. IEEE Trans Syst Man Cybern B Cybern. 2007 Dec;37(6):1567-80. PubMed ID: 18179074.

28. Cooperative Search Method for Multiple UAVs Based on Deep Reinforcement Learning. Gao M; Zhang X. Sensors (Basel). 2022 Sep;22(18). PubMed ID: 36146083.

29. Inverse Reinforcement Learning in Tracking Control Based on Inverse Optimal Control. Xue W; Kolaric P; Fan J; Lian B; Chai T; Lewis FL. IEEE Trans Cybern. 2022 Oct;52(10):10570-10581. PubMed ID: 33877993.

30. Reinforcement learning in continuous time and space. Doya K. Neural Comput. 2000 Jan;12(1):219-45. PubMed ID: 10636940.

31. Spiking neural networks with different reinforcement learning (RL) schemes in a multiagent setting. Christodoulou C; Cleanthous A. Chin J Physiol. 2010 Dec;53(6):447-53. PubMed ID: 21793357.

32. Multiagent Learning of Coordination in Loosely Coupled Multiagent Systems. Yu C; Zhang M; Ren F; Tan G. IEEE Trans Cybern. 2015 Dec;45(12):2853-67. PubMed ID: 25594993.

33. Continuous action deep reinforcement learning for propofol dosing during general anesthesia. Schamberg G; Badgeley M; Meschede-Krasa B; Kwon O; Brown EN. Artif Intell Med. 2022 Jan;123:102227. PubMed ID: 34998516.

34. Kernel-based least squares policy iteration for reinforcement learning. Xu X; Hu D; Lu X. IEEE Trans Neural Netw. 2007 Jul;18(4):973-92. PubMed ID: 17668655.

35. MOSAIC for multiple-reward environments. Sugimoto N; Haruno M; Doya K; Kawato M. Neural Comput. 2012 Mar;24(3):577-606. PubMed ID: 22168558.

37. Optimal Tracking Control of Heterogeneous MASs Using Event-Driven Adaptive Observer and Reinforcement Learning. Xu Y; Sun J; Pan YJ; Wu ZG. IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):5577-5587. PubMed ID: 36191114.

38. Cooperative Deep Reinforcement Learning for Large-Scale Traffic Grid Signal Control. Tan T; Bao F; Deng Y; Jin A; Dai Q; Wang J. IEEE Trans Cybern. 2020 Jun;50(6):2687-2700. PubMed ID: 30946688.