


257 related articles for article (PubMed ID: 30296242)

  • 1. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242
    [TBL] [Abstract][Full Text] [Related]  

  • 2. Reinforcement Learning-Based Linear Quadratic Regulation of Continuous-Time Systems Using Dynamic Output Feedback.
    Rizvi SAA; Lin Z
    IEEE Trans Cybern; 2019 Jan; ():. PubMed ID: 30605117
    [TBL] [Abstract][Full Text] [Related]  

  • 3. Optimal Tracking Control of Unknown Discrete-Time Linear Systems Using Input-Output Measured Data.
    Kiumarsi B; Lewis FL; Naghibi-Sistani MB; Karimpour A
    IEEE Trans Cybern; 2015 Dec; 45(12):2770-9. PubMed ID: 25576591
    [TBL] [Abstract][Full Text] [Related]  

  • 4. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.
    Lewis FL; Vamvoudakis KG
    IEEE Trans Syst Man Cybern B Cybern; 2011 Feb; 41(1):14-25. PubMed ID: 20350860
    [TBL] [Abstract][Full Text] [Related]  

  • 5. Continuous-time Q-learning for infinite-horizon discounted cost linear quadratic regulator problems.
    Palanisamy M; Modares H; Lewis FL; Aurangzeb M
    IEEE Trans Cybern; 2015 Feb; 45(2):165-76. PubMed ID: 24879648
    [TBL] [Abstract][Full Text] [Related]  

  • 6. Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-policy Reinforcement Learning.
    Modares H; Lewis FL; Jiang ZP
    IEEE Trans Cybern; 2016 Nov; 46(11):2401-2410. PubMed ID: 28113995
    [TBL] [Abstract][Full Text] [Related]  

  • 7. Discrete-time nonlinear HJB solution using approximate dynamic programming: convergence proof.
    Al-Tamimi A; Lewis FL; Abu-Khalaf M
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):943-9. PubMed ID: 18632382
    [TBL] [Abstract][Full Text] [Related]  

  • 8. Kernel-based least squares policy iteration for reinforcement learning.
    Xu X; Hu D; Lu X
    IEEE Trans Neural Netw; 2007 Jul; 18(4):973-92. PubMed ID: 17668655
    [TBL] [Abstract][Full Text] [Related]  

  • 9. Model-Free Q-Learning for the Tracking Problem of Linear Discrete-Time Systems.
    Li C; Ding J; Lewis FL; Chai T
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3191-3201. PubMed ID: 38379236
    [TBL] [Abstract][Full Text] [Related]  

  • 10. Optimal Output Regulation of Linear Discrete-Time Systems With Unknown Dynamics Using Reinforcement Learning.
    Jiang Y; Kiumarsi B; Fan J; Chai T; Li J; Lewis FL
    IEEE Trans Cybern; 2020 Jul; 50(7):3147-3156. PubMed ID: 30703054
    [TBL] [Abstract][Full Text] [Related]  

  • 11. Tracking Control for Linear Discrete-Time Networked Control Systems With Unknown Dynamics and Dropout.
    Jiang Y; Fan J; Chai T; Lewis FL; Li J
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4607-4620. PubMed ID: 29990205
    [TBL] [Abstract][Full Text] [Related]  

  • 12. Data-Driven H∞ Optimal Output Feedback Control for Linear Discrete-Time Systems Based on Off-Policy Q-Learning.
    Zhang L; Fan J; Xue W; Lopez VG; Li J; Chai T; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3553-3567. PubMed ID: 34662280
    [TBL] [Abstract][Full Text] [Related]  

  • 13. Adaptive Optimal Output Regulation of Time-Delay Systems via Measurement Feedback.
    Gao W; Jiang ZP
    IEEE Trans Neural Netw Learn Syst; 2019 Mar; 30(3):938-945. PubMed ID: 30047903
    [TBL] [Abstract][Full Text] [Related]  

  • 14. Online adaptive policy learning algorithm for H∞ state feedback control of unknown affine nonlinear discrete-time systems.
    Zhang H; Qin C; Jiang B; Luo Y
    IEEE Trans Cybern; 2014 Dec; 44(12):2706-18. PubMed ID: 25095274
    [TBL] [Abstract][Full Text] [Related]  

  • 15. Approximate dynamic programming for optimal stationary control with control-dependent noise.
    Jiang Y; Jiang ZP
    IEEE Trans Neural Netw; 2011 Dec; 22(12):2392-8. PubMed ID: 21954203
    [TBL] [Abstract][Full Text] [Related]  

  • 16. Learning-Based Predictive Control for Discrete-Time Nonlinear Systems With Stochastic Disturbances.
    Xu X; Chen H; Lian C; Li D
    IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):6202-6213. PubMed ID: 29993751
    [TBL] [Abstract][Full Text] [Related]  

  • 17. Event-Triggered Adaptive Optimal Control With Output Feedback: An Adaptive Dynamic Programming Approach.
    Zhao F; Gao W; Jiang ZP; Liu T
    IEEE Trans Neural Netw Learn Syst; 2021 Nov; 32(11):5208-5221. PubMed ID: 33035169
    [TBL] [Abstract][Full Text] [Related]  

  • 18. An iterative Q-learning based global consensus of discrete-time saturated multi-agent systems.
    Long M; Su H; Wang X; Jiang GP; Wang X
    Chaos; 2019 Oct; 29(10):103127. PubMed ID: 31675802
    [TBL] [Abstract][Full Text] [Related]  

  • 19. H∞ Static Output Feedback Control Design for Discrete-Time Systems Using Reinforcement Learning.
    Valadbeigi AP; Sedigh AK; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):396-406. PubMed ID: 31021775
    [TBL] [Abstract][Full Text] [Related]  

  • 20. Autonomous Collision Avoidance Using MPC with LQR-Based Weight Transformation.
    Taherian S; Halder K; Dixit S; Fallah S
    Sensors (Basel); 2021 Jun; 21(13):. PubMed ID: 34201820
    [TBL] [Abstract][Full Text] [Related]  
