BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

238 related articles for the article (PubMed ID: 36016062)

  • 1. Model-Based Reinforcement Learning with Automated Planning for Network Management.
    Ordonez A; Caicedo OM; Villota W; Rodriguez-Vivas A; da Fonseca NLS
    Sensors (Basel); 2022 Aug; 22(16):. PubMed ID: 36016062
    [TBL] [Abstract][Full Text] [Related]  

  • 2. Continuous action deep reinforcement learning for propofol dosing during general anesthesia.
    Schamberg G; Badgeley M; Meschede-Krasa B; Kwon O; Brown EN
    Artif Intell Med; 2022 Jan; 123():102227. PubMed ID: 34998516
    [TBL] [Abstract][Full Text] [Related]  

  • 3. Neuro-Inspired Reinforcement Learning to Improve Trajectory Prediction in Reward-Guided Behavior.
    Chen BW; Yang SH; Kuo CH; Chen JW; Lo YC; Kuo YT; Lin YC; Chang HC; Lin SH; Yu X; Qu B; Ro SV; Lai HY; Chen YY
    Int J Neural Syst; 2022 Sep; 32(9):2250038. PubMed ID: 35989578
    [TBL] [Abstract][Full Text] [Related]  

  • 4. Human locomotion with reinforcement learning using bioinspired reward reshaping strategies.
    Nowakowski K; Carvalho P; Six JB; Maillet Y; Nguyen AT; Seghiri I; M'Pemba L; Marcille T; Ngo ST; Dao TT
    Med Biol Eng Comput; 2021 Jan; 59(1):243-256. PubMed ID: 33417125
    [TBL] [Abstract][Full Text] [Related]  

  • 5. Integration of Reinforcement Learning in a Virtual Robotic Surgical Simulation.
    Bourdillon AT; Garg A; Wang H; Woo YJ; Pavone M; Boyd J
    Surg Innov; 2023 Feb; 30(1):94-102. PubMed ID: 35503302
    [No Abstract]   [Full Text] [Related]  

  • 6. Asymmetric and adaptive reward coding via normalized reinforcement learning.
    Louie K
    PLoS Comput Biol; 2022 Jul; 18(7):e1010350. PubMed ID: 35862443
    [TBL] [Abstract][Full Text] [Related]  

  • 7. Probing relationships between reinforcement learning and simple behavioral strategies to understand probabilistic reward learning.
    Iyer ES; Kairiss MA; Liu A; Otto AR; Bagot RC
    J Neurosci Methods; 2020 Jul; 341():108777. PubMed ID: 32417532
    [TBL] [Abstract][Full Text] [Related]  

  • 8. Forward and inverse reinforcement learning sharing network weights and hyperparameters.
    Uchibe E; Doya K
    Neural Netw; 2021 Dec; 144():138-153. PubMed ID: 34492548
    [TBL] [Abstract][Full Text] [Related]  

  • 9. Modeling Bellman-error with logistic distribution with applications in reinforcement learning.
    Lv O; Zhou B; Yang LF
    Neural Netw; 2024 Sep; 177():106387. PubMed ID: 38788292
    [TBL] [Abstract][Full Text] [Related]  

  • 10. How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis.
    Collins AG; Frank MJ
    Eur J Neurosci; 2012 Apr; 35(7):1024-35. PubMed ID: 22487033
    [TBL] [Abstract][Full Text] [Related]  

  • 11. Application of Deep Reinforcement Learning to NS-SHAFT Game Signal Control.
    Chang CL; Chen ST; Lin PY; Chang CY
    Sensors (Basel); 2022 Jul; 22(14):. PubMed ID: 35890943
    [TBL] [Abstract][Full Text] [Related]  

  • 12. Multiple memory systems as substrates for multiple decision systems.
    Doll BB; Shohamy D; Daw ND
    Neurobiol Learn Mem; 2015 Jan; 117():4-13. PubMed ID: 24846190
    [TBL] [Abstract][Full Text] [Related]  

  • 13. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.
    Zsuga J; Biro K; Papp C; Tajti G; Gesztelyi R
    Behav Neurosci; 2016 Feb; 130(1):6-18. PubMed ID: 26795580
    [TBL] [Abstract][Full Text] [Related]  

  • 14. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning.
    Gläscher J; Daw N; Dayan P; O'Doherty JP
    Neuron; 2010 May; 66(4):585-95. PubMed ID: 20510862
    [TBL] [Abstract][Full Text] [Related]  

  • 15. Conformer-RL: A deep reinforcement learning library for conformer generation.
    Jiang R; Gogineni T; Kammeraad J; He Y; Tewari A; Zimmerman PM
    J Comput Chem; 2022 Oct; 43(27):1880-1886. PubMed ID: 36000759
    [TBL] [Abstract][Full Text] [Related]  

  • 16. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks.
    Triche A; Maida AS; Kumar A
    Neural Netw; 2022 Jul; 151():16-33. PubMed ID: 35367735
    [TBL] [Abstract][Full Text] [Related]  

  • 17. What matters in reinforcement learning for tractography.
    Théberge A; Desrosiers C; Boré A; Descoteaux M; Jodoin PM
    Med Image Anal; 2024 Apr; 93():103085. PubMed ID: 38219499
    [TBL] [Abstract][Full Text] [Related]  

  • 18. Combining STDP and binary networks for reinforcement learning from images and sparse rewards.
    Chevtchenko SF; Ludermir TB
    Neural Netw; 2021 Dec; 144():496-506. PubMed ID: 34601362
    [TBL] [Abstract][Full Text] [Related]  

  • 19. Is Deep Reinforcement Learning Ready for Practical Applications in Healthcare? A Sensitivity Analysis of Duel-DDQN for Hemodynamic Management in Sepsis Patients.
    Lu M; Shahn Z; Sow D; Doshi-Velez F; Lehman LH
    AMIA Annu Symp Proc; 2020; 2020():773-782. PubMed ID: 33936452
    [TBL] [Abstract][Full Text] [Related]  

  • 20. Frontoparietal network activity during model-based reinforcement learning updates is reduced among adolescents with severe sexual abuse.
    Letkiewicz AM; Cochran AL; Cisler JM
    J Psychiatr Res; 2022 Jan; 145():256-262. PubMed ID: 33199053
    [TBL] [Abstract][Full Text] [Related]  

    Page 1 of 12.