118 related articles for article (PubMed ID: 36944240)
21. When does reward maximization lead to matching law? Sakai Y; Fukai T. PLoS One; 2008; 3(11):e3795. PubMed ID: 19030101
22. Active inference on discrete state-spaces: A synthesis. Da Costa L; Parr T; Sajid N; Veselic S; Neacsu V; Friston K. J Math Psychol; 2020 Dec; 99:102447. PubMed ID: 33343039
23. The anatomy of choice: dopamine and decision-making. Friston K; Schwartenbeck P; FitzGerald T; Moutoussis M; Behrens T; Dolan RJ. Philos Trans R Soc Lond B Biol Sci; 2014 Nov; 369(1655). PubMed ID: 25267823
24. Mice exhibit stochastic and efficient action switching during probabilistic decision making. Beron CC; Neufeld SQ; Linderman SW; Sabatini BL. Proc Natl Acad Sci U S A; 2022 Apr; 119(15):e2113961119. PubMed ID: 35385355
25. Reward-dependent learning in neuronal networks for planning and decision making. Dehaene S; Changeux JP. Prog Brain Res; 2000; 126:217-29. PubMed ID: 11105649
26. Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates. Skvortsova V; Palminteri S; Pessiglione M. J Neurosci; 2014 Nov; 34(47):15621-30. PubMed ID: 25411490
27. A neural network model with dopamine-like reinforcement signal that learns a spatial delayed response task. Suri RE; Schultz W. Neuroscience; 1999; 91(3):871-90. PubMed ID: 10391468
28. Depressive symptoms enhance loss-minimization, but attenuate gain-maximization in history-dependent decision-making. Maddox WT; Gorlick MA; Worthy DA; Beevers CG. Cognition; 2012 Oct; 125(1):118-24. PubMed ID: 22801054
29. Active inference and agency: optimal control without cost functions. Friston K; Samothrakis S; Montague R. Biol Cybern; 2012 Oct; 106(8-9):523-41. PubMed ID: 22864468
30. Reinforcement learning using a continuous time actor-critic framework with spiking neurons. Frémaux N; Sprekeler H; Gerstner W. PLoS Comput Biol; 2013 Apr; 9(4):e1003024. PubMed ID: 23592970
31. Optimal Inference of Hidden Markov Models Through Expert-Acquired Data. Ravari A; Ghoreishi SF; Imani M. IEEE Trans Artif Intell; 2024 Aug; 5(8):3985-4000. PubMed ID: 39144916
32. Dopaminergic Modulation of Human Intertemporal Choice: A Diffusion Model Analysis Using the D2-Receptor Antagonist Haloperidol. Wagner B; Clos M; Sommer T; Peters J. J Neurosci; 2020 Oct; 40(41):7936-7948. PubMed ID: 32948675
36. Mice infer probabilistic models for timing. Li Y; Dudman JT. Proc Natl Acad Sci U S A; 2013 Oct; 110(42):17154-9. PubMed ID: 24082097
37. Optimal decision making and matching are tied through diminishing returns. Kubanek J. Proc Natl Acad Sci U S A; 2017 Aug; 114(32):8499-8504. PubMed ID: 28739920
38. Active Inference, Belief Propagation, and the Bethe Approximation. Schwöbel S; Kiebel S; Marković D. Neural Comput; 2018 Sep; 30(9):2530-2567. PubMed ID: 29949461
39. Morphogenesis as Bayesian inference: A variational approach to pattern formation and control in complex biological systems. Kuchling F; Friston K; Georgiev G; Levin M. Phys Life Rev; 2020 Jul; 33:88-108. PubMed ID: 31320316
40. An approach to solving optimal control problems of nonlinear systems by introducing detail-reward mechanism in deep reinforcement learning. Yao S; Liu X; Zhang Y; Cui Z. Math Biosci Eng; 2022 Jun; 19(9):9258-9290. PubMed ID: 35942758