These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS: Molecular Biopsy of Human Tumors - a resource for Precision Medicine

175 related articles for article (PubMed ID: 29990205)

  • 1. Tracking Control for Linear Discrete-Time Networked Control Systems With Unknown Dynamics and Dropout.
    Jiang Y; Fan J; Chai T; Lewis FL; Li J
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4607-4620. PubMed ID: 29990205

  • 2. Off-policy two-dimensional reinforcement learning for optimal tracking control of batch processes with network-induced dropout and disturbances.
    Jiang X; Huang M; Shi H; Wang X; Zhang Y
    ISA Trans; 2024 Jan; 144():228-244. PubMed ID: 38030447

  • 3. Networked controller and observer design of discrete-time systems with inaccurate model parameters.
    Li J; Xiao Z; Li P; Ding Z
    ISA Trans; 2020 Mar; 98():75-86. PubMed ID: 31466726

  • 4. Optimal Tracking Control of Unknown Discrete-Time Linear Systems Using Input-Output Measured Data.
    Kiumarsi B; Lewis FL; Naghibi-Sistani MB; Karimpour A
    IEEE Trans Cybern; 2015 Dec; 45(12):2770-9. PubMed ID: 25576591

  • 5. Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-policy Reinforcement Learning.
    Modares H; Lewis FL; Jiang ZP
    IEEE Trans Cybern; 2016 Nov; 46(11):2401-2410. PubMed ID: 28113995

  • 6. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242

  • 7. Adaptive Optimal Control of Networked Nonlinear Systems With Stochastic Sensor and Actuator Dropouts Based on Reinforcement Learning.
    Jiang Y; Liu L; Feng G
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3107-3120. PubMed ID: 35731768

  • 8. Policy Iteration Q-Learning for Data-Based Two-Player Zero-Sum Game of Linear Discrete-Time Systems.
    Luo B; Yang Y; Liu D
    IEEE Trans Cybern; 2021 Jul; 51(7):3630-3640. PubMed ID: 32092032

  • 9. Optimal Robust Output Containment of Unknown Heterogeneous Multiagent System Using Off-Policy Reinforcement Learning.
    Zuo S; Song Y; Lewis FL; Davoudi A
    IEEE Trans Cybern; 2018 Nov; 48(11):3197-3207. PubMed ID: 29989978

  • 10. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
    Kiumarsi B; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):140-51. PubMed ID: 25312944

  • 11. H∞ tracking control of completely unknown continuous-time systems via off-policy reinforcement learning.
    Modares H; Lewis FL; Jiang ZP
    IEEE Trans Neural Netw Learn Syst; 2015 Oct; 26(10):2550-62. PubMed ID: 26111401

  • 12. Off-Policy Interleaved Q-Learning: Optimal Control for Affine Nonlinear Discrete-Time Systems.
    Li J; Chai T; Lewis FL; Ding Z; Jiang Y
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1308-1320. PubMed ID: 30273155

  • 13. Leader-Follower Output Synchronization of Linear Heterogeneous Systems With Active Leader Using Reinforcement Learning.
    Yang Y; Modares H; Wunsch DC; Yin Y
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2139-2153. PubMed ID: 29771667

  • 14. Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.
    Fu Y; Chai T
    IEEE Trans Neural Netw Learn Syst; 2016 Dec; 27(12):2577-2587. PubMed ID: 26600376

  • 15. Model-Free Optimal Tracking Control via Critic-Only Q-Learning.
    Luo B; Liu D; Huang T; Wang D
    IEEE Trans Neural Netw Learn Syst; 2016 Oct; 27(10):2134-44. PubMed ID: 27416608

  • 16. A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm.
    Zhang H; Wei Q; Luo Y
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):937-42. PubMed ID: 18632381

  • 17. Data-Driven Robust Control of Discrete-Time Uncertain Linear Systems via Off-Policy Reinforcement Learning.
    Yang Y; Guo Z; Xiong H; Ding DW; Yin Y; Wunsch DC
    IEEE Trans Neural Netw Learn Syst; 2019 Dec; 30(12):3735-3747. PubMed ID: 30843810

  • 18. Reinforcement Learning-Based Linear Quadratic Regulation of Continuous-Time Systems Using Dynamic Output Feedback.
    Rizvi SAA; Lin Z
    IEEE Trans Cybern; 2019 Jan; ():. PubMed ID: 30605117

  • 19. Discrete-Time Non-Zero-Sum Games With Completely Unknown Dynamics.
    Song R; Wei Q; Zhang H; Lewis FL
    IEEE Trans Cybern; 2021 Jun; 51(6):2929-2943. PubMed ID: 31902792

  • 20. Fuzzy H
    Wang J; Wu J; Shen H; Cao J; Rutkowski L
    IEEE Trans Cybern; 2023 Nov; 53(11):7380-7391. PubMed ID: 36417712
