127 related articles for the article with PubMed ID 24879648.

  • 1. Continuous-time Q-learning for infinite-horizon discounted cost linear quadratic regulator problems.
    Palanisamy M; Modares H; Lewis FL; Aurangzeb M
    IEEE Trans Cybern; 2015 Feb; 45(2):165-76. PubMed ID: 24879648

  • 2. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242

  • 3. Discrete-time nonlinear HJB solution using approximate dynamic programming: convergence proof.
    Al-Tamimi A; Lewis FL; Abu-Khalaf M
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):943-9. PubMed ID: 18632382

  • 4. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control.
    Brunton SL; Brunton BW; Proctor JL; Kutz JN
    PLoS One; 2016; 11(2):e0150171. PubMed ID: 26919740

  • 5. Solutions to the Inverse LQR Problem with Application to Biological Systems Analysis.
    Priess MC; Conway R; Choi J; Popovich JM; Radcliffe C
    IEEE Trans Control Syst Technol; 2015 Mar; 23(2):770-777. PubMed ID: 26640359

  • 6. Optimal Tracking Control of Unknown Discrete-Time Linear Systems Using Input-Output Measured Data.
    Kiumarsi B; Lewis FL; Naghibi-Sistani MB; Karimpour A
    IEEE Trans Cybern; 2015 Dec; 45(12):2770-9. PubMed ID: 25576591

  • 7. Optimal linear-consensus algorithms: an LQR perspective.
    Cao Y; Ren W
    IEEE Trans Syst Man Cybern B Cybern; 2010 Jun; 40(3):819-30. PubMed ID: 19884088

  • 8. Reinforcement Learning-Based Linear Quadratic Regulation of Continuous-Time Systems Using Dynamic Output Feedback.
    Rizvi SAA; Lin Z
    IEEE Trans Cybern; 2019 Jan. PubMed ID: 30605117

  • 9. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
    Kiumarsi B; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):140-51. PubMed ID: 25312944

  • 10. Output Feedback Q-Learning for Linear-Quadratic Discrete-Time Finite-Horizon Control Problems.
    Calafiore GC; Possieri C
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):3274-3281. PubMed ID: 32745011

  • 11. Distributed LQR Optimal Protocol for Leader-Following Consensus.
    Sun H; Liu Y; Li F; Niu X
    IEEE Trans Cybern; 2019 Sep; 49(9):3532-3546. PubMed ID: 30040671

  • 12. Design of infinite horizon LQR controller for discrete delay systems in satellite orbit control: A predictive controller and reduction method approach.
    Khosravi M; Azarinfar H; Sabzevari K
    Heliyon; 2024 Jan; 10(2):e24265. PubMed ID: 38312572

  • 13. Integral reinforcement learning for continuous-time input-affine nonlinear systems with simultaneous invariant explorations.
    Lee JY; Park JB; Choi YH
    IEEE Trans Neural Netw Learn Syst; 2015 May; 26(5):916-32. PubMed ID: 25163070

  • 14. A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm.
    Zhang H; Wei Q; Luo Y
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):937-42. PubMed ID: 18632381

  • 15. Off-Policy Reinforcement Learning for Tracking in Continuous-Time Systems on Two Time Scales.
    Xue W; Fan J; Lopez VG; Jiang Y; Chai T; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4334-4346. PubMed ID: 32903187

  • 16. Model-Free Q-Learning for the Tracking Problem of Linear Discrete-Time Systems.
    Li C; Ding J; Lewis FL; Chai T
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3191-3201. PubMed ID: 38379236

  • 17. Optimal control of unknown affine nonlinear discrete-time systems using offline-trained neural networks with proof of convergence.
    Dierks T; Thumati BT; Jagannathan S
    Neural Netw; 2009; 22(5-6):851-60. PubMed ID: 19596551

  • 18. MEC--a near-optimal online reinforcement learning algorithm for continuous deterministic systems.
    Zhao D; Zhu Y
    IEEE Trans Neural Netw Learn Syst; 2015 Feb; 26(2):346-56. PubMed ID: 25474812

  • 19. Discrete state space modeling and control of nonlinear unknown systems.
    Savran A
    ISA Trans; 2013 Nov; 52(6):795-806. PubMed ID: 23978661

  • 20. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.
    Vrabie D; Lewis F
    Neural Netw; 2009 Apr; 22(3):237-46. PubMed ID: 19362449

    Page 1 of 7.