

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

122 related articles for article (PubMed ID: 25167564)

  • 1. Reinforcement learning for port-Hamiltonian systems.
    Sprangers O; Babuška R; Nageshrao SP; Lopes GA
    IEEE Trans Cybern; 2015 May; 45(5):1003-13. PubMed ID: 25167564

  • 2. Tracking control design for fractional order systems: A passivity-based port-Hamiltonian framework.
    Kumar L; Dhillon SS
    ISA Trans; 2023 Jul; 138():1-9. PubMed ID: 36973153

  • 3. Robust reinforcement learning.
    Morimoto J; Doya K
    Neural Comput; 2005 Feb; 17(2):335-59. PubMed ID: 15720771

  • 4. Control of a self-balancing robot with two degrees of freedom via IDA-PBC.
    Gandarilla I; Santibañez V; Sandoval J
    ISA Trans; 2019 May; 88():102-112. PubMed ID: 30583954

  • 5. Efficient model learning methods for actor-critic control.
    Grondman I; Vaandrager M; Buşoniu L; Babuska R; Schuitema E
    IEEE Trans Syst Man Cybern B Cybern; 2012 Jun; 42(3):591-602. PubMed ID: 22156998

  • 6. Off-policy reinforcement learning for H∞ control design.
    Luo B; Wu HN; Huang T
    IEEE Trans Cybern; 2015 Jan; 45(1):65-76. PubMed ID: 25532162

  • 7. Interconnection and damping assignment control based on modified actor-critic algorithm with wavelet function approximation.
    Gheibi A; Ghiasi AR; Ghaemi S; Badamchizadeh MA
    ISA Trans; 2020 Jun; 101():116-129. PubMed ID: 31955947

  • 8. Actor-Critic Learning Control With Regularization and Feature Selection in Policy Gradient Estimation.
    Li L; Li D; Song T; Xu X
    IEEE Trans Neural Netw Learn Syst; 2021 Mar; 32(3):1217-1227. PubMed ID: 32324571

  • 9. Reinforcement learning versus model predictive control: a comparison on a power system problem.
    Ernst D; Glavic M; Capitanescu F; Wehenkel L
    IEEE Trans Syst Man Cybern B Cybern; 2009 Apr; 39(2):517-29. PubMed ID: 19095542

  • 10. Partial Policy-Based Reinforcement Learning for Anatomical Landmark Localization in 3D Medical Images.
    Abdullah Al W; Yun ID
    IEEE Trans Med Imaging; 2020 Apr; 39(4):1245-1255. PubMed ID: 31603816

  • 11. Off-Policy Reinforcement Learning for Synchronization in Multiagent Graphical Games.
    Li J; Modares H; Chai T; Lewis FL; Xie L
    IEEE Trans Neural Netw Learn Syst; 2017 Oct; 28(10):2434-2445. PubMed ID: 28436891

  • 12. Approximate optimal control design for nonlinear one-dimensional parabolic PDE systems using empirical eigenfunctions and neural network.
    Luo B; Wu HN
    IEEE Trans Syst Man Cybern B Cybern; 2012 Dec; 42(6):1538-49. PubMed ID: 22588610

  • 13. Ensemble algorithms in reinforcement learning.
    Wiering MA; van Hasselt H
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):930-6. PubMed ID: 18632380

  • 14. A policy iteration approach to online optimal control of continuous-time constrained-input systems.
    Modares H; Naghibi Sistani MB; Lewis FL
    ISA Trans; 2013 Sep; 52(5):611-21. PubMed ID: 23706414

  • 15. Optimal Elevator Group Control via Deep Asynchronous Actor-Critic Learning.
    Wei Q; Wang L; Liu Y; Polycarpou MM
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5245-5256. PubMed ID: 32071000

  • 16. Characterizing Motor Control of Mastication With Soft Actor-Critic.
    Abdi AH; Sagl B; Srungarapu VP; Stavness I; Prisman E; Abolmaesumi P; Fels S
    Front Hum Neurosci; 2020; 14():188. PubMed ID: 32528267

  • 17. An approach to the design of reinforcement functions in real world, agent-based applications.
    Bonarini A; Bonacina C; Matteucci M
    IEEE Trans Syst Man Cybern B Cybern; 2001; 31(3):288-301. PubMed ID: 18244793

  • 18. Humanoids Learning to Walk: A Natural CPG-Actor-Critic Architecture.
    Li C; Lowe R; Ziemke T
    Front Neurorobot; 2013; 7():5. PubMed ID: 23675345

  • 19. Actor-Critic Learning Control Based on ℓ2-Regularized Temporal-Difference Prediction With Gradient Correction.
    Li L; Li D; Song T; Xu X
    IEEE Trans Neural Netw Learn Syst; 2018 Dec; 29(12):5899-5909. PubMed ID: 29993664

  • 20. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.
    Kiumarsi B; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2015 Jan; 26(1):140-51. PubMed ID: 25312944
