These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

147 related articles for article (PubMed ID: 37583230)

  • 1. Statistical mechanics of continual learning: Variational principle and mean-field potential.
    Li C; Huang Z; Zou W; Huang H
    Phys Rev E; 2023 Jul; 108(1-1):014309. PubMed ID: 37583230

  • 2. Variational Data-Free Knowledge Distillation for Continual Learning.
    Li X; Wang S; Sun J; Xu Z
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12618-12634. PubMed ID: 37126627

  • 3. Variational mean-field theory for training restricted Boltzmann machines with binary synapses.
    Huang H
    Phys Rev E; 2020 Sep; 102(3-1):030301. PubMed ID: 33075982

  • 4. Bayesian continual learning.
    Skatchkovsky N; Jang H; Simeone O
    Front Comput Neurosci; 2022; 16():1037976. PubMed ID: 36465962

  • 5. Return of the normal distribution: Flexible deep continual learning with variational auto-encoders.
    Hong Y; Mundt M; Park S; Uh Y; Byun H
    Neural Netw; 2022 Oct; 154():397-412. PubMed ID: 35944369

  • 6. Continual Learning Using Bayesian Neural Networks.
    Li H; Barnaghi P; Enshaeifar S; Ganz F
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):4243-4252. PubMed ID: 32866104

  • 7. Continual learning with attentive recurrent neural networks for temporal data classification.
    Yin SY; Huang Y; Chang TY; Chang SF; Tseng VS
    Neural Netw; 2023 Jan; 158():171-187. PubMed ID: 36459884

  • 8. Task-Agnostic Continual Learning Using Online Variational Bayes With Fixed-Point Updates.
    Zeno C; Golan I; Hoffer E; Soudry D
    Neural Comput; 2021 Oct; 33(11):3139-3177. PubMed ID: 34474486

  • 9. Bio-inspired, task-free continual learning through activity regularization.
    Lässig F; Aceituno PV; Sorbaro M; Grewe BF
    Biol Cybern; 2023 Oct; 117(4-5):345-361. PubMed ID: 37589728

  • 10. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 11. Memory Recall: A Simple Neural Network Training Framework Against Catastrophic Forgetting.
    Zhang B; Guo Y; Li Y; He Y; Wang H; Dai Q
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2010-2022. PubMed ID: 34339377

  • 12. Origin of the computational hardness for learning with binary synapses.
    Huang H; Kabashima Y
    Phys Rev E Stat Nonlin Soft Matter Phys; 2014 Nov; 90(5-1):052813. PubMed ID: 25493840

  • 13. Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies.
    Shen Y; Dasgupta S; Navlakha S
    Neural Comput; 2023 Oct; 35(11):1797-1819. PubMed ID: 37725710

  • 14. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100():49-58. PubMed ID: 29471195

  • 15. Neural sampling machine with stochastic synapse allows brain-like learning and inference.
    Dutta S; Detorakis G; Khanna A; Grisafe B; Neftci E; Datta S
    Nat Commun; 2022 May; 13(1):2571. PubMed ID: 35546144

  • 16. Synaptic metaplasticity in binarized neural networks.
    Laborieux A; Ernoult M; Hirtzlin T; Querlioz D
    Nat Commun; 2021 May; 12(1):2549. PubMed ID: 33953183

  • 17. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 18. Progressive learning: A deep learning framework for continual learning.
    Fayek HM; Cavedon L; Wu HR
    Neural Netw; 2020 Aug; 128():345-357. PubMed ID: 32470799

  • 19. Continual Multiview Task Learning via Deep Matrix Factorization.
    Sun G; Cong Y; Zhang Y; Zhao G; Fu Y
    IEEE Trans Neural Netw Learn Syst; 2021 Jan; 32(1):139-150. PubMed ID: 32175877

  • 20. A Continual Learning Survey: Defying Forgetting in Classification Tasks.
    De Lange M; Aljundi R; Masana M; Parisot S; Jia X; Leonardis A; Slabaugh G; Tuytelaars T
    IEEE Trans Pattern Anal Mach Intell; 2022 Jul; 44(7):3366-3385. PubMed ID: 33544669
