These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

79 related articles for article (PubMed ID: 25291805)

  • 1. Hierarchical Bayesian inverse reinforcement learning.
    Choi J; Kim KE
    IEEE Trans Cybern; 2015 Apr; 45(4):793-805. PubMed ID: 25291805

  • 2. Inverse Reinforcement Q-Learning Through Expert Imitation for Discrete-Time Systems.
    Xue W; Lian B; Fan J; Kolaric P; Chai T; Lewis FL
    IEEE Trans Neural Netw Learn Syst; 2023 May; 34(5):2386-2399. PubMed ID: 34520364

  • 3. Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning.
    Piot B; Geist M; Pietquin O
    IEEE Trans Neural Netw Learn Syst; 2017 Aug; 28(8):1814-1826. PubMed ID: 27164607

  • 4. Inverse Reinforcement Learning for Adversarial Apprentice Games.
    Lian B; Xue W; Lewis FL; Chai T
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4596-4609. PubMed ID: 34623278

  • 5. The Unreasonable Effectiveness of Inverse Reinforcement Learning in Advancing Cancer Research.
    Kalantari J; Nelson H; Chia N
    Proc AAAI Conf Artif Intell; 2020 Apr; 34(1):437-445. PubMed ID: 34055465

  • 6. Inverse Reinforcement Learning for Trajectory Imitation Using Static Output Feedback Control.
    Xue W; Lian B; Fan J; Chai T; Lewis FL
    IEEE Trans Cybern; 2024 Mar; 54(3):1695-1707. PubMed ID: 37027769

  • 7. Robust Inverse Q-Learning for Continuous-Time Linear Systems in Adversarial Environments.
    Lian B; Xue W; Lewis FL; Chai T
    IEEE Trans Cybern; 2022 Dec; 52(12):13083-13095. PubMed ID: 34403352

  • 8. A Bayesian Approach to Policy Recognition and State Representation Learning.
    Sosic A; Zoubir AM; Koeppl H
    IEEE Trans Pattern Anal Mach Intell; 2018 Jun; 40(6):1295-1308. PubMed ID: 28622668

  • 9. Autonomous navigation of catheters and guidewires in mechanical thrombectomy using inverse reinforcement learning.
    Robertshaw H; Karstensen L; Jackson B; Granados A; Booth TC
    Int J Comput Assist Radiol Surg; 2024 Jun. PubMed ID: 38884893

  • 10. Evaluation of hierarchical Bayesian method through retinotopic brain activities reconstruction from fMRI and MEG signals.
    Yoshioka T; Toyama K; Kawato M; Yamashita O; Nishina S; Yamagishi N; Sato MA
    Neuroimage; 2008 Oct; 42(4):1397-413. PubMed ID: 18620066

  • 11. Scalable Inverse Reinforcement Learning Through Multifidelity Bayesian Optimization.
    Imani M; Ghoreishi SF
    IEEE Trans Neural Netw Learn Syst; 2022 Aug; 33(8):4125-4132. PubMed ID: 33481721

  • 12. Generating Reward Functions Using IRL Towards Individualized Cancer Screening.
    Petousis P; Han SX; Hsu W; Bui AAT
    Artif Intell Health (2018); 2019; 11326():213-227. PubMed ID: 31363717

  • 13. Online learning of single- and multivalued functions with an infinite mixture of linear experts.
    Damas B; Santos-Victor J
    Neural Comput; 2013 Nov; 25(11):3044-91. PubMed ID: 24001344

  • 14. Inter-module credit assignment in modular reinforcement learning.
    Samejima K; Doya K; Kawato M
    Neural Netw; 2003 Sep; 16(7):985-94. PubMed ID: 14692633

  • 15. A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges.
    Zare M; Kebria PM; Khosravi A; Nahavandi S
    IEEE Trans Cybern; 2024 Jul. PubMed ID: 39024072

  • 16. Short-term memory traces for action bias in human reinforcement learning.
    Bogacz R; McClure SM; Li J; Cohen JD; Montague PR
    Brain Res; 2007 Jun; 1153():111-21. PubMed ID: 17459346

  • 17. A hierarchical Bayesian method to resolve an inverse problem of MEG contaminated with eye movement artifacts.
    Fujiwara Y; Yamashita O; Kawawaki D; Doya K; Kawato M; Toyama K; Sato MA
    Neuroimage; 2009 Apr; 45(2):393-409. PubMed ID: 19150653

  • 18. Predicting Goal-directed Human Attention Using Inverse Reinforcement Learning.
    Yang Z; Huang L; Chen Y; Wei Z; Ahn S; Zelinsky G; Samaras D; Hoai M
    Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit; 2020 Jun; 2020():190-199. PubMed ID: 34163124

  • 19. Model-based reinforcement learning under concurrent schedules of reinforcement in rodents.
    Huh N; Jo S; Kim H; Sul JH; Jung MW
    Learn Mem; 2009 May; 16(5):315-23. PubMed ID: 19403794

  • 20. Optimal control in microgrid using multi-agent reinforcement learning.
    Li FD; Wu M; He Y; Chen X
    ISA Trans; 2012 Nov; 51(6):743-51. PubMed ID: 22824135
