BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

127 related articles for article (PubMed ID: 30605117)

  • 1. Reinforcement Learning-Based Linear Quadratic Regulation of Continuous-Time Systems Using Dynamic Output Feedback.
    Rizvi SAA; Lin Z
    IEEE Trans Cybern; 2019 Jan; ():. PubMed ID: 30605117

  • 2. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242

  • 3. Optimal Tracking Control of Unknown Discrete-Time Linear Systems Using Input-Output Measured Data.
    Kiumarsi B; Lewis FL; Naghibi-Sistani MB; Karimpour A
    IEEE Trans Cybern; 2015 Dec; 45(12):2770-9. PubMed ID: 25576591

  • 4. Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-policy Reinforcement Learning.
    Modares H; Lewis FL; Jiang ZP
    IEEE Trans Cybern; 2016 Nov; 46(11):2401-2410. PubMed ID: 28113995

  • 5. Event-Triggered Adaptive Optimal Control With Output Feedback: An Adaptive Dynamic Programming Approach.
    Zhao F; Gao W; Jiang ZP; Liu T
    IEEE Trans Neural Netw Learn Syst; 2021 Nov; 32(11):5208-5221. PubMed ID: 33035169

  • 6. Adaptive Dynamic Programming for Model-Free Global Stabilization of Control Constrained Continuous-Time Systems.
    Rizvi SAA; Lin Z
    IEEE Trans Cybern; 2022 Feb; 52(2):1048-1060. PubMed ID: 32471805

  • 7. Model-Free λ-Policy Iteration for Discrete-Time Linear Quadratic Regulation.
    Yang Y; Kiumarsi B; Modares H; Xu C
    IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):635-649. PubMed ID: 34379597

  • 8. Optimal Output Regulation of Linear Discrete-Time Systems With Unknown Dynamics Using Reinforcement Learning.
    Jiang Y; Kiumarsi B; Fan J; Chai T; Li J; Lewis FL
    IEEE Trans Cybern; 2020 Jul; 50(7):3147-3156. PubMed ID: 30703054

  • 9. Discrete-time nonlinear HJB solution using approximate dynamic programming: convergence proof.
    Al-Tamimi A; Lewis FL; Abu-Khalaf M
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):943-9. PubMed ID: 18632382

  • 10. Optimal Robust Output Containment of Unknown Heterogeneous Multiagent System Using Off-Policy Reinforcement Learning.
    Zuo S; Song Y; Lewis FL; Davoudi A
    IEEE Trans Cybern; 2018 Nov; 48(11):3197-3207. PubMed ID: 29989978

  • 11. Reinforcement Learning and Adaptive Optimal Control for Continuous-Time Nonlinear Systems: A Value Iteration Approach.
    Bian T; Jiang ZP
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2781-2790. PubMed ID: 33417569

  • 12. An Implicit Function-Based Adaptive Control Scheme for Noncanonical-Form Discrete-Time Neural-Network Systems.
    Zhang Y; Tao G; Chen M; Chen W; Zhang Z
    IEEE Trans Cybern; 2021 Dec; 51(12):5728-5739. PubMed ID: 31940572

  • 13. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.
    Lewis FL; Vamvoudakis KG
    IEEE Trans Syst Man Cybern B Cybern; 2011 Feb; 41(1):14-25. PubMed ID: 20350860

  • 14. Output-Feedback Robust Control of Uncertain Systems via Online Data-Driven Learning.
    Na J; Zhao J; Gao G; Li Z
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2650-2662. PubMed ID: 32706646

  • 15. Hybrid Reinforcement Learning for Optimal Control of Non-Linear Switching System.
    Li X; Dong L; Xue L; Sun C
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; 34(11):9161-9170. PubMed ID: 35417353

  • 16. Optimal Tracking Control of Heterogeneous MASs Using Event-Driven Adaptive Observer and Reinforcement Learning.
    Xu Y; Sun J; Pan YJ; Wu ZG
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5577-5587. PubMed ID: 36191114

  • 17. Autonomous Collision Avoidance Using MPC with LQR-Based Weight Transformation.
    Taherian S; Halder K; Dixit S; Fallah S
    Sensors (Basel); 2021 Jun; 21(13):. PubMed ID: 34201820

  • 18. Neural Network-Based Model-Free Adaptive Near-Optimal Tracking Control for a Class of Nonlinear Systems.
    Zhang Y; Li S; Liu X
    IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):6227-6241. PubMed ID: 29993754

  • 19. H∞ Static Output-Feedback Control Design for Discrete-Time Systems Using Reinforcement Learning.
    Valadbeigi AP; Sedigh AK; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):396-406. PubMed ID: 31021775

  • 20. Policy Iteration-Based Learning Design for Linear Continuous-Time Systems Under Initial Stabilizing OPFB Policy.
    Zhang C; Chen C; Lewis FL; Xie S
    IEEE Trans Cybern; 2024 Jul; PP():. PubMed ID: 39037879
