
136 related articles for article (PubMed ID: 34662280)

  • 1. Data-Driven H∞ Optimal Output Feedback Control for Linear Discrete-Time Systems Based on Off-Policy Q-Learning.
    Zhang L; Fan J; Xue W; Lopez VG; Li J; Chai T; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3553-3567. PubMed ID: 34662280

  • 2. Off-Policy Interleaved Q-Learning: Optimal Control for Affine Nonlinear Discrete-Time Systems.
    Li J; Chai T; Lewis FL; Ding Z; Jiang Y
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1308-1320. PubMed ID: 30273155

  • 3. H∞ Static Output-Feedback Control Design for Discrete-Time Systems Using Reinforcement Learning.
    Valadbeigi AP; Sedigh AK; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):396-406. PubMed ID: 31021775

  • 4. Inverse Q-Learning Using Input-Output Data.
    Lian B; Xue W; Lewis FL; Davoudi A
    IEEE Trans Cybern; 2024 Feb; 54(2):728-738. PubMed ID: 38133983

  • 5. Online adaptive policy learning algorithm for H∞ state feedback control of unknown affine nonlinear discrete-time systems.
    Zhang H; Qin C; Jiang B; Luo Y
    IEEE Trans Cybern; 2014 Dec; 44(12):2706-18. PubMed ID: 25095274

  • 6. Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-Policy Reinforcement Learning.
    Modares H; Lewis FL; Jiang ZP
    IEEE Trans Cybern; 2016 Nov; 46(11):2401-2410. PubMed ID: 28113995

  • 7. Synergetic learning for unknown nonlinear H∞ control using neural networks.
    Zhu L; Guo P; Wei Q
    Neural Netw; 2023 Nov; 168():287-299. PubMed ID: 37774514

  • 8. Neural Q-learning for discrete-time nonlinear zero-sum games with adjustable convergence rate.
    Wang Y; Wang D; Zhao M; Liu N; Qiao J
    Neural Netw; 2024 Jul; 175():106274. PubMed ID: 38583264

  • 9. Model-Free Optimal Tracking Control of Nonlinear Input-Affine Discrete-Time Systems via an Iterative Deterministic Q-Learning Algorithm.
    Song S; Zhu M; Dai X; Gong D
    IEEE Trans Neural Netw Learn Syst; 2022 Jun; PP():. PubMed ID: 35657846

  • 10. H∞ static output feedback control for nonlinear networked control systems with time delays and packet dropouts.
    Jiang S; Fang H
    ISA Trans; 2013 Mar; 52(2):215-22. PubMed ID: 23206869

  • 11. Optimal H∞ tracking control of nonlinear systems with zero-equilibrium-free via novel adaptive critic designs.
    Peng Z; Ji H; Zou C; Kuang Y; Cheng H; Shi K; Ghosh BK
    Neural Netw; 2023 Jul; 164():105-114. PubMed ID: 37148606

  • 12. Reinforcement-Learning-Based Disturbance Rejection Control for Uncertain Nonlinear Systems.
    Ran M; Li J; Xie L
    IEEE Trans Cybern; 2022 Sep; 52(9):9621-9633. PubMed ID: 33729973

  • 13. Event-driven H∞ control with critic learning for nonlinear systems.
    Yang X; Gao Z; Zhang J
    Neural Netw; 2020 Dec; 132():30-42. PubMed ID: 32861146

  • 14. Optimal Output Regulation of Linear Discrete-Time Systems With Unknown Dynamics Using Reinforcement Learning.
    Jiang Y; Kiumarsi B; Fan J; Chai T; Li J; Lewis FL
    IEEE Trans Cybern; 2020 Jul; 50(7):3147-3156. PubMed ID: 30703054

  • 15. H∞-Based Minimal Energy Adaptive Control With Preset Convergence Rate.
    Jiang Y; Zhang K; Wu J; Zhang C; Xue W; Chai T; Lewis FL
    IEEE Trans Cybern; 2022 Oct; 52(10):10078-10088. PubMed ID: 33750726

  • 16. Inverse Reinforcement Learning for Trajectory Imitation Using Static Output Feedback Control.
    Xue W; Lian B; Fan J; Chai T; Lewis FL
    IEEE Trans Cybern; 2024 Mar; 54(3):1695-1707. PubMed ID: 37027769

  • 17. H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
    Liu M; Zhang S; Chen H; Sheng W
    IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1928-35. PubMed ID: 25291744

  • 18. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242

  • 19. Policy Iteration-Based Learning Design for Linear Continuous-Time Systems Under Initial Stabilizing OPFB Policy.
    Zhang C; Chen C; Lewis FL; Xie S
    IEEE Trans Cybern; 2024 Jul; PP():. PubMed ID: 39037879

  • 20. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.
    Lewis FL; Vamvoudakis KG
    IEEE Trans Syst Man Cybern B Cybern; 2011 Feb; 41(1):14-25. PubMed ID: 20350860
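
A recurring technique in the entries above (e.g., items 1, 2, 4, 9, and 18) is Q-learning policy iteration for discrete-time linear quadratic control, estimated from measured data rather than from a plant model. The sketch below (Python/NumPy) illustrates only that common baseline; the plant matrices A and B, the noise level, and the horizon are illustrative assumptions, not values from any cited paper, and the listed works extend the scheme to H∞, output-feedback, and nonlinear settings.

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy plant (assumed for illustration; the learner never reads A or B,
    # it only sees sampled transitions (x, u, x_next)).
    A = np.array([[0.95, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    Qc, Rc = np.eye(2), np.eye(1)          # stage cost x'Qx + u'Ru
    n, m = 2, 1
    p = n + m                              # dimension of z = [x; u]

    def quad_basis(z):
        # Monomials z_i*z_j (i <= j) parameterizing the symmetric kernel H
        # of the quadratic Q-function Q(x, u) = z'Hz.
        return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                         for i in range(p) for j in range(i, p)])

    K = np.zeros((m, n))                   # initial admissible gain (A is stable)
    for it in range(8):
        # Behavior policy = -Kx + exploration noise; the target policy -Kx is
        # evaluated from this data, which is what makes the scheme off-policy.
        Phi, y = [], []
        x = rng.standard_normal(n)
        for k in range(200):
            u = -K @ x + 0.5 * rng.standard_normal(m)
            xn = A @ x + (B @ u).ravel()
            un = -K @ xn                   # target-policy action at the next state
            z, zn = np.concatenate([x, u]), np.concatenate([xn, un])
            # Q-function Bellman equation: z'Hz - zn'Hzn = x'Qx + u'Ru
            Phi.append(quad_basis(z) - quad_basis(zn))
            y.append(x @ Qc @ x + u @ Rc @ u)
            x = xn
        h = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
        H, idx = np.zeros((p, p)), 0       # rebuild symmetric H from coefficients
        for i in range(p):
            for j in range(i, p):
                H[i, j] = H[j, i] = h[idx]; idx += 1
        K = np.linalg.solve(H[n:, n:], H[n:, :n])   # improvement: K = Huu^-1 Hux

    # Model-based LQR gain via Riccati iteration, used here only to check
    # the learned gain against the known answer for this toy plant.
    P = np.eye(n)
    for _ in range(500):
        P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
            Rc + B.T @ P @ B, B.T @ P @ A)
    print("learned gain:    ", K)
    print("model-based gain:", np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A))

In the output-feedback entries (e.g., items 1, 18, and 20), the state x in z is typically reconstructed from, or replaced by, a finite history of measured inputs and outputs, so that no state measurement is needed.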
