105 related articles for article (PubMed ID: 17324068)
1. Biological implementation of the temporal difference algorithm for reinforcement learning: theoretical comment on O'Reilly et al. (2007). Houk JC. Behav Neurosci. 2007 Feb;121(1):231-2. PubMed ID: 17324068
2. Simulation of rat behavior by a reinforcement learning algorithm in consideration of appearance probabilities of reinforcement signals. Murakoshi K, Noguchi T. Biosystems. 2005 Apr;80(1):83-90. PubMed ID: 15740837
3. An implementation of reinforcement learning based on spike timing dependent plasticity. Roberts PD, Santiago RA, Lafferriere G. Biol Cybern. 2008 Dec;99(6):517-23. PubMed ID: 18941775
4. On the asymptotic equivalence between differential Hebbian and temporal difference learning. Kolodziejski C, Porr B, Wörgötter F. Neural Comput. 2009 Apr;21(4):1173-202. PubMed ID: 19018698
5. PVLV: the primary value and learned value Pavlovian learning algorithm. O'Reilly RC, Frank MJ, Hazy TE, Watz B. Behav Neurosci. 2007 Feb;121(1):31-49. PubMed ID: 17324049
6. [Mathematical models of decision making and learning]. Ito M, Doya K. Brain Nerve. 2008 Jul;60(7):791-8. PubMed ID: 18646619
7. Short-term memory traces for action bias in human reinforcement learning. Bogacz R, McClure SM, Li J, Cohen JD, Montague PR. Brain Res. 2007 Jun;1153:111-21. PubMed ID: 17459346
8. A spiking neural network model of an actor-critic learning agent. Potjans W, Morrison A, Diesmann M. Neural Comput. 2009 Feb;21(2):301-39. PubMed ID: 19196231
9. Efficient reinforcement learning: computational theories, neuroscience and robotics. Kawato M, Samejima K. Curr Opin Neurobiol. 2007 Apr;17(2):205-12. PubMed ID: 17374483
10. Reinforcement learning in continuous time and space: interference and not ill conditioning is the main problem when using distributed function approximators. Baddeley B. IEEE Trans Syst Man Cybern B Cybern. 2008 Aug;38(4):950-6. PubMed ID: 18632383
11. Adaptive learning via selectionism and Bayesianism, Part I: connection between the two. Zhang J. Neural Netw. 2009 Apr;22(3):220-8. PubMed ID: 19386469
17. Hebbian errors in learning: an analysis using the Oja model. Rădulescu A, Cox K, Adams P. J Theor Biol. 2009 Jun;258(4):489-501. PubMed ID: 19248792
18. Biological arm motion through reinforcement learning. Izawa J, Kondo T, Ito K. Biol Cybern. 2004 Jul;91(1):10-22. PubMed ID: 15309543
19. Modeling the sub-cellular signaling pathways involved in reinforcement learning at the striatum. Wanjerkhede SM, Bapi RS. Prog Brain Res. 2008;168:193-206. PubMed ID: 18166396
20. Adaptive properties of differential learning rates for positive and negative outcomes. Cazé RD, van der Meer MA. Biol Cybern. 2013 Dec;107(6):711-9. PubMed ID: 24085507