BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

257 related articles for article (PubMed ID: 15671597)

  • 1. The role of multisensor data fusion in neuromuscular control of a sagittal arm with a pair of muscles using actor-critic reinforcement learning method.
    Golkhou V; Parnianpour M; Lucas C
    Technol Health Care; 2004; 12(6):425-38. PubMed ID: 15671597

  • 2. Neuromuscular control of the point to point and oscillatory movements of a sagittal arm with the actor-critic reinforcement learning method.
    Golkhou V; Parnianpour M; Lucas C
    Comput Methods Biomech Biomed Engin; 2005 Apr; 8(2):103-13. PubMed ID: 16154874

  • 3. Stability and movement of a one-link neuromusculoskeletal sagittal arm.
    Dinneen JA; Hemami H
    IEEE Trans Biomed Eng; 1993 Jun; 40(6):541-8. PubMed ID: 8262535

  • 4. A model of the cerebellar pathways applied to the control of a single-joint robot arm actuated by McKibben artificial muscles.
    Eskiizmirliler S; Forestier N; Tondu B; Darlot C
    Biol Cybern; 2002 May; 86(5):379-94. PubMed ID: 11984652

  • 5. Reinforcement learning for a biped robot based on a CPG-actor-critic method.
    Nakamura Y; Mori T; Sato MA; Ishii S
    Neural Netw; 2007 Aug; 20(6):723-35. PubMed ID: 17412559

  • 6. Impedance learning for robotic contact tasks using natural actor-critic algorithm.
    Kim B; Park J; Park S; Kang S
    IEEE Trans Syst Man Cybern B Cybern; 2010 Apr; 40(2):433-43. PubMed ID: 19696001

  • 7. A spiking neural network model of an actor-critic learning agent.
    Potjans W; Morrison A; Diesmann M
    Neural Comput; 2009 Feb; 21(2):301-39. PubMed ID: 19196231

  • 8. Control of nonaffine nonlinear discrete-time systems using reinforcement-learning-based linearly parameterized neural networks.
    Yang Q; Vance JB; Jagannathan S
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):994-1001. PubMed ID: 18632390

  • 9. Reinforcement learning of motor skills with policy gradients.
    Peters J; Schaal S
    Neural Netw; 2008 May; 21(4):682-97. PubMed ID: 18482830

  • 10. Improved Adaptive-Reinforcement Learning Control for morphing unmanned air vehicles.
    Valasek J; Doebbler J; Tandale MD; Meade AJ
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):1014-20. PubMed ID: 18632393

  • 11. Biological arm motion through reinforcement learning.
    Izawa J; Kondo T; Ito K
    Biol Cybern; 2004 Jul; 91(1):10-22. PubMed ID: 15309543

  • 12. GA-based fuzzy reinforcement learning for control of a magnetic bearing system.
    Lin CT; Jou CP
    IEEE Trans Syst Man Cybern B Cybern; 2000; 30(2):276-89. PubMed ID: 18244754

  • 13. Computation of inverse functions in a model of cerebellar and reflex pathways allows to control a mobile mechanical segment.
    Ebadzadeh M; Tondu B; Darlot C
    Neuroscience; 2005; 133(1):29-49. PubMed ID: 15893629

  • 14. Early motor development from partially ordered neural-body dynamics: experiments with a cortico-spinal-musculo-skeletal model.
    Kuniyoshi Y; Sangawa S
    Biol Cybern; 2006 Dec; 95(6):589-605. PubMed ID: 17123097

  • 15. A parameter control method in reinforcement learning to rapidly follow unexpected environmental changes.
    Murakoshi K; Mizuno J
    Biosystems; 2004 Nov; 77(1-3):109-17. PubMed ID: 15527950

  • 16. Neural network approach to continuous-time direct adaptive optimal control for partially unknown nonlinear systems.
    Vrabie D; Lewis F
    Neural Netw; 2009 Apr; 22(3):237-46. PubMed ID: 19362449

  • 17. Ensemble algorithms in reinforcement learning.
    Wiering MA; van Hasselt H
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):930-6. PubMed ID: 18632380

  • 18. Learning and generation of goal-directed arm reaching from scratch.
    Kambara H; Kim K; Shin D; Sato M; Koike Y
    Neural Netw; 2009 May; 22(4):348-61. PubMed ID: 19121565

  • 19. Reinforcement-learning-based output-feedback control of nonstrict nonlinear discrete-time systems with application to engine emission control.
    Shih P; Kaul BC; Jagannathan S; Drallmeier JA
    IEEE Trans Syst Man Cybern B Cybern; 2009 Oct; 39(5):1162-79. PubMed ID: 19336317

  • 20. Reliability of internal prediction/estimation and its application. I. Adaptive action selection reflecting reliability of value function.
    Sakaguchi Y; Takano M
    Neural Netw; 2004 Sep; 17(7):935-52. PubMed ID: 15312837
