
190 related articles for the article with PubMed ID 34530451

  • 1. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation.
    Rajakumar A; Rinzel J; Chen ZS
    Neural Comput; 2021 Sep; 33(10):2603-2645. PubMed ID: 34530451

  • 2. Considerations in using recurrent neural networks to probe neural dynamics.
    Kao JC
    J Neurophysiol; 2019 Dec; 122(6):2504-2521. PubMed ID: 31619125

  • 3. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework.
    Song HF; Yang GR; Wang XJ
    PLoS Comput Biol; 2016 Feb; 12(2):e1004792. PubMed ID: 26928718

  • 4. Training Spiking Neural Networks in the Strong Coupling Regime.
    Kim CM; Chow CC
    Neural Comput; 2021 Apr; 33(5):1199-1233. PubMed ID: 34496392

  • 5. Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences.
    Tino P; Köteles M
    IEEE Trans Neural Netw; 1999; 10(2):284-302. PubMed ID: 18252527

  • 6. Excitatory-inhibitory recurrent dynamics produce robust visual grids and stable attractors.
    Zhang X; Long X; Zhang SJ; Chen ZS
    Cell Rep; 2022 Dec; 41(11):111777. PubMed ID: 36516752

  • 7. Generalized Recurrent Neural Network accommodating Dynamic Causal Modeling for functional MRI analysis.
    Wang Y; Wang Y; Lui YW
    Neuroimage; 2018 Sep; 178:385-402. PubMed ID: 29782993

  • 8. A Midbrain Inspired Recurrent Neural Network Model for Robust Change Detection.
    Sawant Y; Kundu JN; Radhakrishnan VB; Sridharan D
    J Neurosci; 2022 Nov; 42(44):8262-8283. PubMed ID: 36123120

  • 9. Emergence of belief-like representations through reinforcement learning.
    Hennig JA; Pinto SAR; Yamaguchi T; Linderman SW; Uchida N; Gershman SJ
    bioRxiv; 2023 Apr. PubMed ID: 37066383

  • 10. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.
    Ehrlich DB; Stone JT; Brandfonbrener D; Atanasov A; Murray JD
    eNeuro; 2021; 8(1). PubMed ID: 33328247

  • 11. Functional Implications of Dale's Law in Balanced Neuronal Network Dynamics and Decision Making.
    Barranca VJ; Bhuiyan A; Sundgren M; Xing F
    Front Neurosci; 2022; 16:801847. PubMed ID: 35295091

  • 12. Interpreting a recurrent neural network's predictions of ICU mortality risk.
    Ho LV; Aczon M; Ledbetter D; Wetzel R
    J Biomed Inform; 2021 Feb; 114:103672. PubMed ID: 33422663

  • 13. Representations of continuous attractors of recurrent neural networks.
    Yu J; Yi Z; Zhang L
    IEEE Trans Neural Netw; 2009 Feb; 20(2):368-372. PubMed ID: 19150791

  • 14. Learning to represent continuous variables in heterogeneous neural networks.
    Darshan R; Rivkind A
    Cell Rep; 2022 Apr; 39(1):110612. PubMed ID: 35385721

  • 15. Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint.
    Jarne C; Caruso M
    Cogn Neurodyn; 2024 Jun; 18(3):1323-1335. PubMed ID: 38826641

  • 16. Emergence of belief-like representations through reinforcement learning.
    Hennig JA; Romero Pinto SA; Yamaguchi T; Linderman SW; Uchida N; Gershman SJ
    PLoS Comput Biol; 2023 Sep; 19(9):e1011067. PubMed ID: 37695776

  • 17. Learning continuous chaotic attractors with a reservoir computer.
    Smith LM; Kim JZ; Lu Z; Bassett DS
    Chaos; 2022 Jan; 32(1):011101. PubMed ID: 35105129

  • 18. Designing Interpretable Recurrent Neural Networks for Video Reconstruction via Deep Unfolding.
    Luong HV; Joukovsky B; Deligiannis N
    IEEE Trans Image Process; 2021; 30:4099-4113. PubMed ID: 33798083

  • 19. Hebbian learning of context in recurrent neural networks.
    Brunel N
    Neural Comput; 1996 Nov; 8(8):1677-1710. PubMed ID: 8888613

  • 20. Sparse RNNs can support high-capacity classification.
    Turcu D; Abbott LF
    PLoS Comput Biol; 2022 Dec; 18(12):e1010759. PubMed ID: 36516226
