These tools are no longer maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

145 related articles for article (PubMed ID: 32092032)

  • 1. Policy Iteration Q-Learning for Data-Based Two-Player Zero-Sum Game of Linear Discrete-Time Systems.
    Luo B; Yang Y; Liu D
    IEEE Trans Cybern; 2021 Jul; 51(7):3630-3640. PubMed ID: 32092032

  • 2. Solving the Zero-Sum Control Problem for Tidal Turbine System: An Online Reinforcement Learning Approach.
    Fang H; Zhang M; He S; Luan X; Liu F; Ding Z
    IEEE Trans Cybern; 2023 Dec; 53(12):7635-7647. PubMed ID: 35839191

  • 3. Discrete-Time Non-Zero-Sum Games With Completely Unknown Dynamics.
    Song R; Wei Q; Zhang H; Lewis FL
    IEEE Trans Cybern; 2021 Jun; 51(6):2929-2943. PubMed ID: 31902792

  • 4. Model-Free λ-Policy Iteration for Discrete-Time Linear Quadratic Regulation.
    Yang Y; Kiumarsi B; Modares H; Xu C
    IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):635-649. PubMed ID: 34379597

  • 5. Fuzzy H∞
    Wang J; Wu J; Shen H; Cao J; Rutkowski L
    IEEE Trans Cybern; 2023 Nov; 53(11):7380-7391. PubMed ID: 36417712

  • 6. Online Solution of Two-Player Zero-Sum Games for Continuous-Time Nonlinear Systems With Completely Unknown Dynamics.
    Fu Y; Chai T
    IEEE Trans Neural Netw Learn Syst; 2016 Dec; 27(12):2577-2587. PubMed ID: 26600376

  • 7. Neural Q-learning for discrete-time nonlinear zero-sum games with adjustable convergence rate.
    Wang Y; Wang D; Zhao M; Liu N; Qiao J
    Neural Netw; 2024 Jul; 175():106274. PubMed ID: 38583264

  • 8. Tracking Control for Linear Discrete-Time Networked Control Systems With Unknown Dynamics and Dropout.
    Jiang Y; Fan J; Chai T; Lewis FL; Li J
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4607-4620. PubMed ID: 29990205

  • 9. Off-Policy Interleaved Q-Learning: Optimal Control for Affine Nonlinear Discrete-Time Systems.
    Li J; Chai T; Lewis FL; Ding Z; Jiang Y
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1308-1320. PubMed ID: 30273155

  • 10. Output Feedback Q-Learning Control for the Discrete-Time Linear Quadratic Regulator Problem.
    Rizvi SAA; Lin Z
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1523-1536. PubMed ID: 30296242

  • 11. Secure Control for Markov Jump Cyber-Physical Systems Subject to Malicious Attacks: A Resilient Hybrid Learning Scheme.
    Shen H; Wang Y; Wu J; Park JH; Wang J
    IEEE Trans Cybern; 2024 Nov; 54(11):7068-7079. PubMed ID: 39240742

  • 12. Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games.
    Zhu Y; Zhao D
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1228-1241. PubMed ID: 33306474

  • 13. Data-Driven H∞
    Liu Q; Yan H; Zhang H; Wang M; Tian Y
    IEEE Trans Cybern; 2024 Aug; PP():. PubMed ID: 39120994

  • 14. Hybrid Reinforcement Learning for Optimal Control of Non-Linear Switching System.
    Li X; Dong L; Xue L; Sun C
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; 34(11):9161-9170. PubMed ID: 35417353

  • 15. Policy Iteration Q-Learning for Linear Itô Stochastic Systems With Markovian Jumps and Its Application to Power Systems.
    Ming Z; Zhang H; Wang Y; Dai J
    IEEE Trans Cybern; 2024 Jun; PP():. PubMed ID: 38865225

  • 16. Model-Free Q-Learning for the Tracking Problem of Linear Discrete-Time Systems.
    Li C; Ding J; Lewis FL; Chai T
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3191-3201. PubMed ID: 38379236

  • 17. Evolving and Incremental Value Iteration Schemes for Nonlinear Discrete-Time Zero-Sum Games.
    Zhao M; Wang D; Ha M; Qiao J
    IEEE Trans Cybern; 2023 Jul; 53(7):4487-4499. PubMed ID: 36063514

  • 18. An iterative Q-learning based global consensus of discrete-time saturated multi-agent systems.
    Long M; Su H; Wang X; Jiang GP; Wang X
    Chaos; 2019 Oct; 29(10):103127. PubMed ID: 31675802

  • 19. Model-Free Optimal Tracking Control via Critic-Only Q-Learning.
    Luo B; Liu D; Huang T; Wang D
    IEEE Trans Neural Netw Learn Syst; 2016 Oct; 27(10):2134-44. PubMed ID: 27416608

  • 20. Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data.
    Zhu Y; Zhao D; Li X
    IEEE Trans Neural Netw Learn Syst; 2017 Mar; 28(3):714-725. PubMed ID: 27249839

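With the PubMed4Hh tools retired, a comparable "related articles" list for a PubMed ID can still be retrieved directly from NCBI's public E-utilities ELink endpoint. The sketch below only builds the request URL (the endpoint and parameter names are standard E-utilities, but the helper function name is ours); fetching and parsing the JSON response is left to the caller.

```python
from urllib.parse import urlencode

# NCBI E-utilities ELink endpoint (public, no key required for light use)
EUTILS_ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def related_articles_url(pmid: str) -> str:
    """Build an ELink request asking PubMed for records related to `pmid`."""
    params = {
        "dbfrom": "pubmed",  # source database of the input ID
        "db": "pubmed",      # target database for the links
        "id": pmid,          # the article of interest
        "cmd": "neighbor",   # 'neighbor' returns related records
        "retmode": "json",   # JSON instead of the default XML
    }
    return f"{EUTILS_ELINK}?{urlencode(params)}"

# Example: URL for the article this page was built around
url = related_articles_url("32092032")
```

In the JSON reply, the related PMIDs appear under the `pubmed_pubmed` link set, ordered by PubMed's relatedness score.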