137 related articles for PubMed ID 35939629

  • 1. Beyond Drift Diffusion Models: Fitting a Broad Class of Decision and Reinforcement Learning Models with HDDM.
    Fengler A; Bera K; Pedersen ML; Frank MJ
    J Cogn Neurosci; 2022 Sep; 34(10):1780-1805. PubMed ID: 35939629

  • 2. Simultaneous Hierarchical Bayesian Parameter Estimation for Reinforcement Learning and Drift Diffusion Models: a Tutorial and Links to Neural Data.
    Pedersen ML; Frank MJ
    Comput Brain Behav; 2020 Dec; 3(4):458-471. PubMed ID: 35128308

  • 3. Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience.
    Fengler A; Govindarajan LN; Chen T; Frank MJ
    Elife; 2021 Apr; 10():. PubMed ID: 33821788

  • 4. The drift diffusion model as the choice rule in reinforcement learning.
    Pedersen ML; Frank MJ; Biele G
    Psychon Bull Rev; 2017 Aug; 24(4):1234-1251. PubMed ID: 27966103

  • 5. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
    Wiecki TV; Sofer I; Frank MJ
    Front Neuroinform; 2013; 7():14. PubMed ID: 23935581

  • 6. The drift diffusion model as the choice rule in inter-temporal and risky choice: A case study in medial orbitofrontal cortex lesion patients and controls.
    Peters J; D'Esposito M
    PLoS Comput Biol; 2020 Apr; 16(4):e1007615. PubMed ID: 32310962

  • 7. Dopaminergic Modulation of Human Intertemporal Choice: A Diffusion Model Analysis Using the D2-Receptor Antagonist Haloperidol.
    Wagner B; Clos M; Sommer T; Peters J
    J Neurosci; 2020 Oct; 40(41):7936-7948. PubMed ID: 32948675

  • 8. A reinforcement learning diffusion decision model for value-based decisions.
    Fontanesi L; Gluth S; Spektor MS; Rieskamp J
    Psychon Bull Rev; 2019 Aug; 26(4):1099-1121. PubMed ID: 30924057

  • 9. Mutual benefits: Combining reinforcement learning with sequential sampling models.
    Miletić S; Boag RJ; Forstmann BU
    Neuropsychologia; 2020 Jan; 136():107261. PubMed ID: 31733237

  • 10. The empirical status of predictive coding and active inference.
    Hodson R; Mehta M; Smith R
    Neurosci Biobehav Rev; 2024 Feb; 157():105473. PubMed ID: 38030100

  • 11. Computational approaches to modeling gambling behaviour: Opportunities for understanding disordered gambling.
    Hales CA; Clark L; Winstanley CA
    Neurosci Biobehav Rev; 2023 Apr; 147():105083. PubMed ID: 36758827

  • 12. A new model of decision processing in instrumental learning tasks.
    Miletić S; Boag RJ; Trutti AC; Stevenson N; Forstmann BU; Heathcote A
    Elife; 2021 Jan; 10():. PubMed ID: 33501916

  • 13. Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices.
    Zhang L; Lengersdorff L; Mikus N; Gläscher J; Lamm C
    Soc Cogn Affect Neurosci; 2020 Jul; 15(6):695-707. PubMed ID: 32608484

  • 14. Integrated Bayesian models of learning and decision making for saccadic eye movements.
    Brodersen KH; Penny WD; Harrison LM; Daunizeau J; Ruff CC; Duzel E; Friston KJ; Stephan KE
    Neural Netw; 2008 Nov; 21(9):1247-60. PubMed ID: 18835129

  • 15. Bayesian analysis of the piecewise diffusion decision model.
    Holmes WR; Trueblood JS
    Behav Res Methods; 2018 Apr; 50(2):730-743. PubMed ID: 28597236

  • 16. Using Drift Diffusion and RL Models to Disentangle Effects of Depression On Decision-Making vs. Learning in the Probabilistic Reward Task.
    Dillon DG; Belleau EL; Origlio J; McKee M; Jahan A; Meyer A; Souther MK; Brunner D; Kuhn M; Ang YS; Cusin C; Fava M; Pizzagalli DA
    Comput Psychiatr; 2024; 8(1):46-69. PubMed ID: 38774430

  • 17. Autonomic responses to choice outcomes: Links to task performance and reinforcement-learning parameters.
    Hayes WM; Wedell DH
    Biol Psychol; 2020 Oct; 156():107968. PubMed ID: 33027684

  • 18. Multi-factor analysis in language production: Sequential sampling models mimic and extend regression results.
    Anders R; Van Maanen L; Alario FX
    Cogn Neuropsychol; 2019; 36(5-6):234-264. PubMed ID: 31076011

  • 19. Active inference and the two-step task.
    Gijsen S; Grundei M; Blankenburg F
    Sci Rep; 2022 Oct; 12(1):17682. PubMed ID: 36271279

  • 20. A comparison of conflict diffusion models in the flanker task through pseudolikelihood Bayes factors.
    Evans NJ; Servant M
    Psychol Rev; 2020 Jan; 127(1):114-135. PubMed ID: 31599635
