These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

211 related articles for article (PubMed ID: 31689680)

  • 1. A complementary learning systems approach to temporal difference learning.
    Blakeman S; Mareschal D
    Neural Netw; 2020 Feb; 122():218-230. PubMed ID: 31689680

  • 2. Deep Reinforcement Learning With Modulated Hebbian Plus Q-Network Architecture.
    Ladosz P; Ben-Iwhiwhu E; Dick J; Ketz N; Kolouri S; Krichmar JL; Pilly PK; Soltoggio A
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2045-2056. PubMed ID: 34559664

  • 3. Neural learning rules for generating flexible predictions and computing the successor representation.
    Fang C; Aronov D; Abbott LF; Mackevicius EL
    Elife; 2023 Mar; 12():. PubMed ID: 36928104

  • 4. Selective particle attention: Rapidly and flexibly selecting features for deep reinforcement learning.
    Blakeman S; Mareschal D
    Neural Netw; 2022 Jun; 150():408-421. PubMed ID: 35358888

  • 5. Human-level control through deep reinforcement learning.
    Mnih V; Kavukcuoglu K; Silver D; Rusu AA; Veness J; Bellemare MG; Graves A; Riedmiller M; Fidjeland AK; Ostrovski G; Petersen S; Beattie C; Sadik A; Antonoglou I; King H; Kumaran D; Wierstra D; Legg S; Hassabis D
    Nature; 2015 Feb; 518(7540):529-33. PubMed ID: 25719670

  • 6. Deep reinforcement learning for automated radiation adaptation in lung cancer.
    Tseng HH; Luo Y; Cui S; Chien JT; Ten Haken RK; Naqa IE
    Med Phys; 2017 Dec; 44(12):6690-6705. PubMed ID: 29034482

  • 7. Combining STDP and binary networks for reinforcement learning from images and sparse rewards.
    Chevtchenko SF; Ludermir TB
    Neural Netw; 2021 Dec; 144():496-506. PubMed ID: 34601362

  • 8. Multisource Transfer Double DQN Based on Actor Learning.
    Pan J; Wang X; Cheng Y; Yu Q
    IEEE Trans Neural Netw Learn Syst; 2018 Jun; 29(6):2227-2238. PubMed ID: 29771674

  • 9. Temporal-difference reinforcement learning with distributed representations.
    Kurth-Nelson Z; Redish AD
    PLoS One; 2009 Oct; 4(10):e7362. PubMed ID: 19841749

  • 10. A complementary learning approach for expertise transference of human-optimized controllers.
    Perrusquía A
    Neural Netw; 2022 Jan; 145():33-41. PubMed ID: 34715533

  • 11. What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated.
    Kumaran D; Hassabis D; McClelland JL
    Trends Cogn Sci; 2016 Jul; 20(7):512-534. PubMed ID: 27315762

  • 12. Application of Deep Reinforcement Learning to NS-SHAFT Game Signal Control.
    Chang CL; Chen ST; Lin PY; Chang CY
    Sensors (Basel); 2022 Jul; 22(14):. PubMed ID: 35890943

  • 13. Reinforcement learning in continuous time and space: interference and not ill conditioning is the main problem when using distributed function approximators.
    Baddeley B
    IEEE Trans Syst Man Cybern B Cybern; 2008 Aug; 38(4):950-6. PubMed ID: 18632383

  • 14. Integrating temporal difference methods and self-organizing neural networks for reinforcement learning with delayed evaluative feedback.
    Tan AH; Lu N; Xiao D
    IEEE Trans Neural Netw; 2008 Feb; 19(2):230-44. PubMed ID: 18269955

  • 15. DynMat, a network that can learn after learning.
    Lee JH
    Neural Netw; 2019 Aug; 116():88-100. PubMed ID: 31015043

  • 16. Reinforcement Learning, Fast and Slow.
    Botvinick M; Ritter S; Wang JX; Kurth-Nelson Z; Blundell C; Hassabis D
    Trends Cogn Sci; 2019 May; 23(5):408-422. PubMed ID: 31003893

  • 17. ToyArchitecture: Unsupervised learning of interpretable models of the environment.
    Vítků J; Dluhoš P; Davidson J; Nikl M; Andersson S; Paška P; Šinkora J; Hlubuček P; Stránský M; Hyben M; Poliak M; Feyereisl J; Rosa M
    PLoS One; 2020; 15(5):e0230432. PubMed ID: 32421693

  • 18. Incorporating rapid neocortical learning of new schema-consistent information into complementary learning systems theory.
    McClelland JL
    J Exp Psychol Gen; 2013 Nov; 142(4):1190-1210. PubMed ID: 23978185

  • 19. Epistemic Autonomy: Self-supervised Learning in the Mammalian Hippocampus.
    Santos-Pata D; Amil AF; Raikov IG; Rennó-Costa C; Mura A; Soltesz I; Verschure PFMJ
    Trends Cogn Sci; 2021 Jul; 25(7):582-595. PubMed ID: 33906817

  • 20. Hippocampal Contribution to Probabilistic Feedback Learning: Modeling Observation- and Reinforcement-based Processes.
    Patt VM; Palombo DJ; Esterman M; Verfaellie M
    J Cogn Neurosci; 2022 Jul; 34(8):1429-1446. PubMed ID: 35604353
