
239 related articles for "Continual task learning in natural and artificial agents" (PubMed ID: 36682991)

  • 1. Continual task learning in natural and artificial agents.
    Flesch T; Saxe A; Summerfield C
    Trends Neurosci; 2023 Mar; 46(3):199-210. PubMed ID: 36682991

  • 2. Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals.
    Flesch T; Nagy DG; Saxe A; Summerfield C
    PLoS Comput Biol; 2023 Jan; 19(1):e1010808. PubMed ID: 36656823

  • 3. Comparing continual task learning in minds and machines.
    Flesch T; Balaguer J; Dekker R; Nili H; Summerfield C
    Proc Natl Acad Sci U S A; 2018 Oct; 115(44):E10313-E10322. PubMed ID: 30322916

  • 4. From lazy to rich to exclusive task representations in neural networks and neural codes.
    Farrell M; Recanatesi S; Shea-Brown E
    Curr Opin Neurobiol; 2023 Dec; 83():102780. PubMed ID: 37757585

  • 5. Orthogonal representations for robust context-dependent task performance in brains and neural networks.
    Flesch T; Juechems K; Dumbalska T; Saxe A; Summerfield C
    Neuron; 2022 Apr; 110(7):1258-1270.e11. PubMed ID: 35085492

  • 6. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization.
    Masse NY; Grant GD; Freedman DJ
    Proc Natl Acad Sci U S A; 2018 Oct; 115(44):E10467-E10475. PubMed ID: 30315147

  • 7. Sleep-like unsupervised replay reduces catastrophic forgetting in artificial neural networks.
    Tadros T; Krishnan GP; Ramyaa R; Bazhenov M
    Nat Commun; 2022 Dec; 13(1):7742. PubMed ID: 36522325

  • 8. Analyzing biological and artificial neural networks: challenges with opportunities for synergy?
    Barrett DG; Morcos AS; Macke JH
    Curr Opin Neurobiol; 2019 Apr; 55():55-64. PubMed ID: 30785004

  • 9. Abstract representations emerge naturally in neural networks trained to perform multiple tasks.
    Johnston WJ; Fusi S
    Nat Commun; 2023 Feb; 14(1):1040. PubMed ID: 36823136

  • 10. Neural population geometry: An approach for understanding biological and artificial neural networks.
    Chung S; Abbott LF
    Curr Opin Neurobiol; 2021 Oct; 70():137-144. PubMed ID: 34801787

  • 11. If deep learning is the answer, what is the question?
    Saxe A; Nelli S; Summerfield C
    Nat Rev Neurosci; 2021 Jan; 22(1):55-67. PubMed ID: 33199854

  • 12. Signatures of task learning in neural representations.
    Gurnani H; Cayco Gajic NA
    Curr Opin Neurobiol; 2023 Dec; 83():102759. PubMed ID: 37708653

  • 13. Triple-Memory Networks: A Brain-Inspired Method for Continual Learning.
    Wang L; Lei B; Li Q; Su H; Zhu J; Zhong Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):1925-1934. PubMed ID: 34529579

  • 14. Overcoming Long-Term Catastrophic Forgetting Through Adversarial Neural Pruning and Synaptic Consolidation.
    Peng J; Tang B; Jiang H; Li Z; Lei Y; Lin T; Li H
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4243-4256. PubMed ID: 33577459

  • 15. Improving transparency and representational generalizability through parallel continual learning.
    Paknezhad M; Rengarajan H; Yuan C; Suresh S; Gupta M; Ramasamy S; Lee HK
    Neural Netw; 2023 Apr; 161():449-465. PubMed ID: 36805261

  • 16. Progressive learning: A deep learning framework for continual learning.
    Fayek HM; Cavedon L; Wu HR
    Neural Netw; 2020 Aug; 128():345-357. PubMed ID: 32470799

  • 17. Beneficial Perturbation Network for Designing General Adaptive Artificial Intelligence Systems.
    Wen S; Rios A; Ge Y; Itti L
    IEEE Trans Neural Netw Learn Syst; 2022 Aug; 33(8):3778-3791. PubMed ID: 33596177

  • 18. Deep learning encodes robust discriminative neuroimaging representations to outperform standard machine learning.
    Abrol A; Fu Z; Salman M; Silva R; Du Y; Plis S; Calhoun V
    Nat Commun; 2021 Jan; 12(1):353. PubMed ID: 33441557

  • 19. The geometry of representational drift in natural and artificial neural networks.
    Aitken K; Garrett M; Olsen S; Mihalas S
    PLoS Comput Biol; 2022 Nov; 18(11):e1010716. PubMed ID: 36441762

  • 20. Neural circuits for learning context-dependent associations of stimuli.
    Zhu H; Paschalidis IC; Hasselmo ME
    Neural Netw; 2018 Nov; 107():48-60. PubMed ID: 30177226
