147 related articles for article (PubMed ID: 37583230)
1. Statistical mechanics of continual learning: Variational principle and mean-field potential. Li C; Huang Z; Zou W; Huang H. Phys Rev E; 2023 Jul; 108(1-1):014309. PubMed ID: 37583230
2. Variational Data-Free Knowledge Distillation for Continual Learning. Li X; Wang S; Sun J; Xu Z. IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12618-12634. PubMed ID: 37126627
3. Variational mean-field theory for training restricted Boltzmann machines with binary synapses. Huang H. Phys Rev E; 2020 Sep; 102(3-1):030301. PubMed ID: 33075982
4. Bayesian continual learning. Skatchkovsky N; Jang H; Simeone O. Front Comput Neurosci; 2022; 16:1037976. PubMed ID: 36465962
5. Return of the normal distribution: Flexible deep continual learning with variational auto-encoders. Hong Y; Mundt M; Park S; Uh Y; Byun H. Neural Netw; 2022 Oct; 154:397-412. PubMed ID: 35944369
6. Continual Learning Using Bayesian Neural Networks. Li H; Barnaghi P; Enshaeifar S; Ganz F. IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):4243-4252. PubMed ID: 32866104
7. Continual learning with attentive recurrent neural networks for temporal data classification. Yin SY; Huang Y; Chang TY; Chang SF; Tseng VS. Neural Netw; 2023 Jan; 158:171-187. PubMed ID: 36459884
10. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Auer P; Burgsteiner H; Maass W. Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524
11. Memory Recall: A Simple Neural Network Training Framework Against Catastrophic Forgetting. Zhang B; Guo Y; Li Y; He Y; Wang H; Dai Q. IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2010-2022. PubMed ID: 34339377
12. Origin of the computational hardness for learning with binary synapses. Huang H; Kabashima Y. Phys Rev E Stat Nonlin Soft Matter Phys; 2014 Nov; 90(5-1):052813. PubMed ID: 25493840
13. Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies. Shen Y; Dasgupta S; Navlakha S. Neural Comput; 2023 Oct; 35(11):1797-1819. PubMed ID: 37725710
14. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework. Deng L; Jiao P; Pei J; Wu Z; Li G. Neural Netw; 2018 Apr; 100:49-58. PubMed ID: 29471195