These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

87 related articles for article (PubMed ID: 25585427)
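Lists like this one can still be generated after the tool's retirement. As a minimal sketch (an assumption for illustration, not the discontinued PubMed4Hh backend), the public NCBI E-utilities ELink endpoint returns the PubMed "related articles" neighbors for a given PMID:

```python
# Sketch: building an NCBI E-utilities ELink query that asks for the
# PubMed "related articles" neighbors of one seed article. The helper
# name `related_articles_url` is hypothetical; the endpoint and
# parameters are the documented public E-utilities interface.
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def related_articles_url(pmid: str) -> str:
    """Build an ELink URL requesting PubMed neighbors of the given PMID."""
    params = {
        "dbfrom": "pubmed",   # source database of the seed record
        "db": "pubmed",       # target database for the linked records
        "id": pmid,           # the seed article, e.g. "25585427"
        "cmd": "neighbor",    # ask for computed related records
        "retmode": "json",    # JSON instead of the default XML
    }
    return f"{EUTILS_BASE}?{urlencode(params)}"

url = related_articles_url("25585427")
```

Fetching that URL (e.g. with `urllib.request.urlopen`) yields a `linksets` structure whose `pubmed_pubmed` link set contains the related PMIDs, from which a listing like the one below can be assembled.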

  • 1. Optimal critic learning for robot control in time-varying environments.
    Wang C; Li Y; Ge SS; Lee TH
    IEEE Trans Neural Netw Learn Syst; 2015 Oct; 26(10):2301-10. PubMed ID: 25585427

  • 2. Neural Networks Enhanced Optimal Admittance Control of Robot-Environment Interaction Using Reinforcement Learning.
    Peng G; Chen CLP; Yang C
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4551-4561. PubMed ID: 33651696

  • 3. Impedance learning for robotic contact tasks using natural actor-critic algorithm.
    Kim B; Park J; Park S; Kang S
    IEEE Trans Syst Man Cybern B Cybern; 2010 Apr; 40(2):433-43. PubMed ID: 19696001

  • 4. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
    Kiumarsi B; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):140-51. PubMed ID: 25312944

  • 5. Model-Free Optimal Tracking Control via Critic-Only Q-Learning.
    Luo B; Liu D; Huang T; Wang D
    IEEE Trans Neural Netw Learn Syst; 2016 Oct; 27(10):2134-44. PubMed ID: 27416608

  • 6. Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems.
    Liu YJ; Tang L; Tong S; Chen CL; Li DJ
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):165-76. PubMed ID: 25438326

  • 7. Stochastic Optimal Control for Robot Manipulation Skill Learning Under Time-Varying Uncertain Environment.
    Liu X; Liu Z; Huang P
    IEEE Trans Cybern; 2024 Apr; 54(4):2015-2025. PubMed ID: 36256715

  • 8. Reference Adaptation for Robots in Physical Interactions With Unknown Environments.
    Wang C; Li Y; Ge SS; Lee TH
    IEEE Trans Cybern; 2017 Nov; 47(11):3504-3515. PubMed ID: 27214923

  • 9. Neural network learning of robot arm impedance in operational space.
    Tsuji T; Ito K; Morasso PG
    IEEE Trans Syst Man Cybern B Cybern; 1996; 26(2):290-8. PubMed ID: 18263030

  • 10. Policy improvement by a model-free Dyna architecture.
    Hwang KS; Lo CY
    IEEE Trans Neural Netw Learn Syst; 2013 May; 24(5):776-88. PubMed ID: 24808427

  • 11. Neural Networks Enhanced Adaptive Admittance Control of Optimized Robot-Environment Interaction.
    Yang C; Peng G; Li Y; Cui R; Cheng L; Li Z
    IEEE Trans Cybern; 2019 Jul; 49(7):2568-2579. PubMed ID: 29993904

  • 12. Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems With Disturbances.
    Song R; Lewis FL; Wei Q; Zhang H
    IEEE Trans Cybern; 2016 May; 46(5):1041-50. PubMed ID: 25935054

  • 13. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.
    Modares H; Lewis FL; Naghibi-Sistani MB
    IEEE Trans Neural Netw Learn Syst; 2013 Oct; 24(10):1513-25. PubMed ID: 24808590

  • 14. Control of nonaffine nonlinear discrete-time systems using reinforcement-learning-based linearly parameterized neural networks.
    Yang Q; Vance JB; Jagannathan S
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):994-1001. PubMed ID: 18632390

  • 15. Adaptive learning in tracking control based on the dual critic network design.
    Ni Z; He H; Wen J
    IEEE Trans Neural Netw Learn Syst; 2013 Jun; 24(6):913-28. PubMed ID: 24808473

  • 16. Improved Adaptive-Reinforcement Learning Control for morphing unmanned air vehicles.
    Valasek J; Doebbler J; Tandale MD; Meade AJ
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):1014-20. PubMed ID: 18632393

  • 17. Deep Multi-Critic Network for accelerating Policy Learning in multi-agent environments.
    Hook J; Silva V; Kondoz A
    Neural Netw; 2020 Aug; 128():97-106. PubMed ID: 32446194

  • 18. Smooth trajectory tracking of three-link robot: a self-organizing CMAC approach.
    Hwang KS; Lin CS
    IEEE Trans Syst Man Cybern B Cybern; 1998; 28(5):680-92. PubMed ID: 18255987

  • 19. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.
    Liu D; Yang X; Wang D; Wei Q
    IEEE Trans Cybern; 2015 Jul; 45(7):1372-85. PubMed ID: 25872221

  • 20. Multi Pseudo Q-Learning-Based Deterministic Policy Gradient for Tracking Control of Autonomous Underwater Vehicles.
    Shi W; Song S; Wu C; Chen CLP
    IEEE Trans Neural Netw Learn Syst; 2019 Dec; 30(12):3534-3546. PubMed ID: 30602426

    Showing results 1-20 of 87 (page 1 of 5).