BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

138 related articles for article (PubMed ID: 33886483)

  • 1. Data-Driven Dynamic Multiobjective Optimal Control: An Aspiration-Satisfying Reinforcement Learning Approach.
    Mazouchi M; Yang Y; Modares H
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6183-6193. PubMed ID: 33886483

  • 2. Hamiltonian-Driven Adaptive Dynamic Programming With Approximation Errors.
    Yang Y; Modares H; Vamvoudakis KG; He W; Xu CZ; Wunsch DC
    IEEE Trans Cybern; 2022 Dec; 52(12):13762-13773. PubMed ID: 34495864

  • 3. An approach to solving optimal control problems of nonlinear systems by introducing detail-reward mechanism in deep reinforcement learning.
    Yao S; Liu X; Zhang Y; Cui Z
    Math Biosci Eng; 2022 Jun; 19(9):9258-9290. PubMed ID: 35942758

  • 4. Hamiltonian-Driven Adaptive Dynamic Programming With Efficient Experience Replay.
    Yang Y; Pan Y; Xu CZ; Wunsch DC
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3278-3290. PubMed ID: 36279344

  • 5. Optimal Output Regulation of Linear Discrete-Time Systems With Unknown Dynamics Using Reinforcement Learning.
    Jiang Y; Kiumarsi B; Fan J; Chai T; Li J; Lewis FL
    IEEE Trans Cybern; 2020 Jul; 50(7):3147-3156. PubMed ID: 30703054

  • 6. Model-Free Reinforcement Learning for Fully Cooperative Consensus Problem of Nonlinear Multiagent Systems.
    Wang H; Li M
    IEEE Trans Neural Netw Learn Syst; 2022 Apr; 33(4):1482-1491. PubMed ID: 33338022

  • 7. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
    Luo B; Wu HN; Huang T; Liu D
    Neural Netw; 2015 Nov; 71():150-8. PubMed ID: 26356598

  • 8. Treed Gaussian Process Regression for Solving Offline Data-Driven Continuous Multiobjective Optimization Problems.
    Mazumdar A; López-Ibáñez M; Chugh T; Hakanen J; Miettinen K
    Evol Comput; 2023 Dec; 31(4):375-399. PubMed ID: 37126577

  • 9. Optimal Synchronization Control of Multiagent Systems With Input Saturation via Off-Policy Reinforcement Learning.
    Qin J; Li M; Shi Y; Ma Q; Zheng WX
    IEEE Trans Neural Netw Learn Syst; 2019 Jan; 30(1):85-96. PubMed ID: 29993726

  • 10. Action Mapping: A Reinforcement Learning Method for Constrained-Input Systems.
    Yuan X; Wang Y; Liu J; Sun C
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7145-7157. PubMed ID: 35025751

  • 11. A policy iteration approach to online optimal control of continuous-time constrained-input systems.
    Modares H; Naghibi Sistani MB; Lewis FL
    ISA Trans; 2013 Sep; 52(5):611-21. PubMed ID: 23706414

  • 12. Learning-Based Adaptive Optimal Tracking Control of Strict-Feedback Nonlinear Systems.
    Gao W; Jiang ZP
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2614-2624. PubMed ID: 29771677

  • 13. Continuous-Time Time-Varying Policy Iteration.
    Wei Q; Liao Z; Yang Z; Li B; Liu D
    IEEE Trans Cybern; 2020 Dec; 50(12):4958-4971. PubMed ID: 31329153

  • 14. Reinforcement Learning for Robust Dynamic Event-Driven Constrained Control.
    Yang X; Wang D
    IEEE Trans Neural Netw Learn Syst; 2024 May; PP():. PubMed ID: 38700967

  • 15. Approximate Dynamic Programming for Nonlinear-Constrained Optimizations.
    Yang X; He H; Zhong X
    IEEE Trans Cybern; 2021 May; 51(5):2419-2432. PubMed ID: 31329149

  • 16. Hierarchical Optimal Synchronization for Linear Systems via Reinforcement Learning: A Stackelberg-Nash Game Perspective.
    Li M; Qin J; Ma Q; Zheng WX; Kang Y
    IEEE Trans Neural Netw Learn Syst; 2021 Apr; 32(4):1600-1611. PubMed ID: 32340962

  • 17. Solving Multiobjective Constrained Trajectory Optimization Problem by an Extended Evolutionary Algorithm.
    Chai R; Savvaris A; Tsourdos A; Xia Y; Chai S
    IEEE Trans Cybern; 2020 Apr; 50(4):1630-1643. PubMed ID: 30489277

  • 18. Dual Heuristic Programming for Optimal Control of Continuous-Time Nonlinear Systems Using Single Echo State Network.
    Liu C; Zhang H; Luo Y; Su H
    IEEE Trans Cybern; 2022 Mar; 52(3):1701-1712. PubMed ID: 32396118

  • 19. Reinforcement Learning and Adaptive Optimal Control for Continuous-Time Nonlinear Systems: A Value Iteration Approach.
    Bian T; Jiang ZP
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):2781-2790. PubMed ID: 33417569

  • 20. Hamiltonian-Driven Adaptive Dynamic Programming for Continuous Nonlinear Dynamical Systems.
    Yang Y; Wunsch D; Yin Y
    IEEE Trans Neural Netw Learn Syst; 2017 Aug; 28(8):1929-1940. PubMed ID: 28166510
