2. Learning Generative State Space Models for Active Inference. Çatal O, Wauthier S, De Boom C, Verbelen T, Dhoedt B. Front Comput Neurosci. 2020;14:574372. PubMed ID: 33304260.
3. Sophisticated Inference. Friston K, Da Costa L, Hafner D, Hesp C, Parr T. Neural Comput. 2021 Mar;33(3):713-763. PubMed ID: 33626312.
4. Reward Maximization Through Discrete Active Inference. Da Costa L, Sajid N, Parr T, Friston K, Smith R. Neural Comput. 2023 Apr;35(5):807-852. PubMed ID: 36944240.
5. Active Inference in OpenAI Gym: A Paradigm for Computational Investigations Into Psychiatric Illness. Cullen M, Davey B, Friston KJ, Moran RJ. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018 Sep;3(9):809-818. PubMed ID: 30082215.
6. Active inference and learning. Friston K, FitzGerald T, Rigoli F, Schwartenbeck P, O'Doherty J, Pezzulo G. Neurosci Biobehav Rev. 2016 Sep;68:862-879. PubMed ID: 27375276.
8. Posterior weighted reinforcement learning with state uncertainty. Larsen T, Leslie DS, Collins EJ, Bogacz R. Neural Comput. 2010 May;22(5):1149-1179. PubMed ID: 20100078.
9. Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop. Biehl M, Guckelsberger C, Salge C, Smith SC, Polani D. Front Neurorobot. 2018;12:45. PubMed ID: 30214404.
10. Variational Dynamic for Self-Supervised Exploration in Deep Reinforcement Learning. Bai C, Liu P, Liu K, Wang L, Zhao Y, Han L, Wang Z. IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4776-4790. PubMed ID: 34851835.
11. State anxiety biases estimates of uncertainty and impairs reward learning in volatile environments. Hein TP, de Fockert J, Ruiz MH. Neuroimage. 2021 Jan;224:117424. PubMed ID: 33035670.
12. Deep Active Inference and Scene Construction. Heins RC, Mirza MB, Parr T, Friston K, Kagan I, Pooresmaeili A. Front Artif Intell. 2020;3:509354. PubMed ID: 33733195.
13. Implicit Posteriori Parameter Distribution Optimization in Reinforcement Learning. Li T, Yang G, Chu J. IEEE Trans Cybern. 2024 May;54(5):3051-3064. PubMed ID: 37030741.
14. Signal Novelty Detection as an Intrinsic Reward for Robotics. Kubovčík M, Dirgová Luptáková I, Pospíchal J. Sensors (Basel). 2023 Apr;23(8). PubMed ID: 37112324.
15. Active inference on discrete state-spaces: A synthesis. Da Costa L, Parr T, Sajid N, Veselic S, Neacsu V, Friston K. J Math Psychol. 2020 Dec;99:102447. PubMed ID: 33343039.
16. Weak Human Preference Supervision for Deep Reinforcement Learning. Cao Z, Wong K, Lin CT. IEEE Trans Neural Netw Learn Syst. 2021 Dec;32(12):5369-5378. PubMed ID: 34101604.
17. Belief state representation in the dopamine system. Babayan BM, Uchida N, Gershman SJ. Nat Commun. 2018 May;9(1):1891. PubMed ID: 29760401.
18. Realizing Active Inference in Variational Message Passing: The Outcome-Blind Certainty Seeker. Champion T, Grześ M, Bowman H. Neural Comput. 2021 Sep;33(10):2762-2826. PubMed ID: 34280302.