195 related articles for PubMed ID 33631609

  • 1. Compositional memory in attractor neural networks with one-step learning.
    Davis GP; Katz GE; Gentili RJ; Reggia JA
    Neural Netw; 2021 Jun; 138():78-97. PubMed ID: 33631609

  • 2. NeuroLISP: High-level symbolic programming with attractor neural networks.
    Davis GP; Katz GE; Gentili RJ; Reggia JA
    Neural Netw; 2022 Feb; 146():200-219. PubMed ID: 34894482

  • 3. A programmable neural virtual machine based on a fast store-erase learning rule.
    Katz GE; Davis GP; Gentili RJ; Reggia JA
    Neural Netw; 2019 Nov; 119():10-30. PubMed ID: 31376635

  • 4. Network capacity analysis for latent attractor computation.
    Doboli S; Minai AA
    Network; 2003 May; 14(2):273-302. PubMed ID: 12790185

  • 5. Engineering neural systems for high-level problem solving.
    Sylvester J; Reggia J
    Neural Netw; 2016 Jul; 79():37-52. PubMed ID: 27101230

  • 6. Gated spiking neural network using Iterative Free-Energy Optimization and rank-order coding for structure learning in memory sequences (INFERNO GATE).
    Pitti A; Quoy M; Lavandier C; Boucenna S
    Neural Netw; 2020 Jan; 121():242-258. PubMed ID: 31581065

  • 7. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs.
    Driscoll LN; Shenoy K; Sussillo D
    Nat Neurosci; 2024 Jul; 27(7):1349-1363. PubMed ID: 38982201

  • 8. Memory dynamics in attractor networks with saliency weights.
    Tang H; Li H; Yan R
    Neural Comput; 2010 Jul; 22(7):1899-926. PubMed ID: 20235821

  • 9. Flexible Working Memory Through Selective Gating and Attentional Tagging.
    Kruijne W; Bohte SM; Roelfsema PR; Olivers CNL
    Neural Comput; 2021 Jan; 33(1):1-40. PubMed ID: 33080159

  • 10. Persistent learning signals and working memory without continuous attractors.
    Park IM; Ságodi Á; Sokół PA
    ArXiv; 2023 Aug. PubMed ID: 37664407

  • 11. A Gaussian attractor network for memory and recognition with experience-dependent learning.
    Hu X; Zhang B
    Neural Comput; 2010 May; 22(5):1333-57. PubMed ID: 20100070

  • 12. Human-like systematic generalization through a meta-learning neural network.
    Lake BM; Baroni M
    Nature; 2023 Nov; 623(7985):115-121. PubMed ID: 37880371

  • 13. Overparameterized neural networks implement associative memory.
    Radhakrishnan A; Belkin M; Uhler C
    Proc Natl Acad Sci U S A; 2020 Nov; 117(44):27162-27170. PubMed ID: 33067397

  • 14. Effect of dilution in asymmetric recurrent neural networks.
    Folli V; Gosti G; Leonetti M; Ruocco G
    Neural Netw; 2018 Aug; 104():50-59. PubMed ID: 29705670

  • 15. Multiscale representations of community structures in attractor neural networks.
    Haga T; Fukai T
    PLoS Comput Biol; 2021 Aug; 17(8):e1009296. PubMed ID: 34424901

  • 16. Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks.
    Han D; Doya K; Tani J
    Neural Netw; 2020 Sep; 129():149-162. PubMed ID: 32534378

  • 17. Task representations in neural networks trained to perform many cognitive tasks.
    Yang GR; Joglekar MR; Song HF; Newsome WT; Wang XJ
    Nat Neurosci; 2019 Feb; 22(2):297-306. PubMed ID: 30643294

  • 18. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation.
    Fiebig F; Lansner A
    J Neurosci; 2017 Jan; 37(1):83-96. PubMed ID: 28053032

  • 19. Attractor Dynamics in Networks with Learning Rules Inferred from In Vivo Data.
    Pereira U; Brunel N
    Neuron; 2018 Jul; 99(1):227-238.e4. PubMed ID: 29909997

  • 20. Modeling memory: what do we learn from attractor neural networks?
    Brunel N; Nadal JP
    C R Acad Sci III; 1998; 321(2-3):249-52. PubMed ID: 9759349
