These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.



134 related articles for article (PubMed ID: 31021775)

  • 1. H∞
    Valadbeigi AP; Sedigh AK; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):396-406. PubMed ID: 31021775
    [TBL] [Abstract][Full Text] [Related]  

  • 2. Optimal Output-Feedback Control of Unknown Continuous-Time Linear Systems Using Off-policy Reinforcement Learning.
    Modares H; Lewis FL; Jiang ZP
    IEEE Trans Cybern; 2016 Nov; 46(11):2401-2410. PubMed ID: 28113995
    [TBL] [Abstract][Full Text] [Related]  

  • 3. Data-Driven H∞
    Zhang L; Fan J; Xue W; Lopez VG; Li J; Chai T; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3553-3567. PubMed ID: 34662280
    [TBL] [Abstract][Full Text] [Related]  

  • 4. Fuzzy H∞
    Wang J; Wu J; Shen H; Cao J; Rutkowski L
    IEEE Trans Cybern; 2023 Nov; 53(11):7380-7391. PubMed ID: 36417712
    [TBL] [Abstract][Full Text] [Related]  

  • 5. Inverse Q-Learning Using Input-Output Data.
    Lian B; Xue W; Lewis FL; Davoudi A
    IEEE Trans Cybern; 2024 Feb; 54(2):728-738. PubMed ID: 38133983
    [TBL] [Abstract][Full Text] [Related]  

  • 6. Online adaptive policy learning algorithm for H∞ state feedback control of unknown affine nonlinear discrete-time systems.
    Zhang H; Qin C; Jiang B; Luo Y
    IEEE Trans Cybern; 2014 Dec; 44(12):2706-18. PubMed ID: 25095274
    [TBL] [Abstract][Full Text] [Related]  

  • 7. Reinforcement Learning-Based Linear Quadratic Regulation of Continuous-Time Systems Using Dynamic Output Feedback.
    Rizvi SAA; Lin Z
    IEEE Trans Cybern; 2019 Jan; ():. PubMed ID: 30605117
    [TBL] [Abstract][Full Text] [Related]  

  • 8. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242
    [TBL] [Abstract][Full Text] [Related]  

  • 9. Dynamic output feedback H∞
    Li L; Li X; Ye S
    ISA Trans; 2019 Dec; 95():1-10. PubMed ID: 30611523
    [TBL] [Abstract][Full Text] [Related]  

  • 10. An iterative Q-learning based global consensus of discrete-time saturated multi-agent systems.
    Long M; Su H; Wang X; Jiang GP; Wang X
    Chaos; 2019 Oct; 29(10):103127. PubMed ID: 31675802
    [TBL] [Abstract][Full Text] [Related]  

  • 11. H∞
    Zhang H; Han J; Wang Y; Jiang H
    IEEE Trans Cybern; 2019 Oct; 49(10):3713-3721. PubMed ID: 30004898
    [TBL] [Abstract][Full Text] [Related]  

  • 12. H∞
    Jiang Y; Zhang K; Wu J; Zhang C; Xue W; Chai T; Lewis FL
    IEEE Trans Cybern; 2022 Oct; 52(10):10078-10088. PubMed ID: 33750726
    [TBL] [Abstract][Full Text] [Related]  

  • 13. Optimal Output Regulation of Linear Discrete-Time Systems With Unknown Dynamics Using Reinforcement Learning.
    Jiang Y; Kiumarsi B; Fan J; Chai T; Li J; Lewis FL
    IEEE Trans Cybern; 2020 Jul; 50(7):3147-3156. PubMed ID: 30703054
    [TBL] [Abstract][Full Text] [Related]  

  • 14. Dynamic Intermittent Feedback Design for H∞
    Yang Y; Modares H; Vamvoudakis KG; Yin Y; Wunsch DC
    IEEE Trans Cybern; 2020 Aug; 50(8):3752-3765. PubMed ID: 31478887
    [TBL] [Abstract][Full Text] [Related]  

  • 15. Mixed H2/H∞ output-feedback control of second-order neutral systems with time-varying state and input delays.
    Karimi HR; Gao H
    ISA Trans; 2008 Jul; 47(3):311-24. PubMed ID: 18501358
    [TBL] [Abstract][Full Text] [Related]  

  • 16. H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
    Liu M; Zhang S; Chen H; Sheng W
    IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1928-35. PubMed ID: 25291744
    [TBL] [Abstract][Full Text] [Related]  

  • 17. Inverse Reinforcement Learning for Trajectory Imitation Using Static Output Feedback Control.
    Xue W; Lian B; Fan J; Chai T; Lewis FL
    IEEE Trans Cybern; 2024 Mar; 54(3):1695-1707. PubMed ID: 37027769
    [TBL] [Abstract][Full Text] [Related]  

  • 18. Dynamic output feedback H∞
    Kazemy A; Gyurkovics É; Takács T
    ISA Trans; 2020 Jan; 96():185-194. PubMed ID: 31202534
    [TBL] [Abstract][Full Text] [Related]  

  • 19. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.
    Lewis FL; Vamvoudakis KG
    IEEE Trans Syst Man Cybern B Cybern; 2011 Feb; 41(1):14-25. PubMed ID: 20350860
    [TBL] [Abstract][Full Text] [Related]  

  • 20. H∞
    Huo S; Zhang Y
    ISA Trans; 2020 Apr; 99():28-36. PubMed ID: 31561874
    [TBL] [Abstract][Full Text] [Related]  
