140 related articles for article (PubMed ID: 37223467)
1. Dynamics in Deep Classifiers Trained with the Square Loss: Normalization, Low Rank, Neural Collapse, and Generalization Bounds.
Xu M; Rangamani A; Liao Q; Galanti T; Poggio T
Research (Wash D C); 2023; 6():0024. PubMed ID: 37223467
2. Theoretical issues in deep networks.
Poggio T; Banburski A; Liao Q
Proc Natl Acad Sci U S A; 2020 Dec; 117(48):30039-30045. PubMed ID: 32518109
3. Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers.
Paquin AL; Chaib-Draa B; Giguère P
Neural Netw; 2023 Jul; 164():382-394. PubMed ID: 37167751
4. High-dimensional dynamics of generalization error in neural networks.
Advani MS; Saxe AM; Sompolinsky H
Neural Netw; 2020 Dec; 132():428-446. PubMed ID: 33022471
5. Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.
Zhang J; Liu T; Tao D
IEEE Trans Neural Netw Learn Syst; 2023 Aug (epub ahead of print). PubMed ID: 37585328
6. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis U LES
Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253
7. Stochastic Mirror Descent on Overparameterized Nonlinear Models.
Azizan N; Lale S; Hassibi B
IEEE Trans Neural Netw Learn Syst; 2022 Dec; 33(12):7717-7727. PubMed ID: 34270431
8. Convergence of deep convolutional neural networks.
Xu Y; Zhang H
Neural Netw; 2022 Sep; 153():553-563. PubMed ID: 35839599
9. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup.
Goldt S; Advani MS; Saxe AM; Krzakala F; Zdeborová L
J Stat Mech; 2020 Dec; 2020(12):124010. PubMed ID: 34262607
10. Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness.
Jin P; Lu L; Tang Y; Karniadakis GE
Neural Netw; 2020 Oct; 130():85-99. PubMed ID: 32650153
11. Upper bound of the expected training error of neural network regression for a Gaussian noise sequence.
Hagiwara K; Hayasaka T; Toda N; Usui S; Kuno K
Neural Netw; 2001 Dec; 14(10):1419-29. PubMed ID: 11771721
12. Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme.
Peng X; Wang FY; Li L
IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7910-7920. PubMed ID: 35157598
13. Low-Rank Deep Convolutional Neural Network for Multitask Learning.
Su F; Shang HY; Wang JY
Comput Intell Neurosci; 2019; 2019():7410701. PubMed ID: 31236107
14. Universal mean-field upper bound for the generalization gap of deep neural networks.
Ariosto S; Pacelli R; Ginelli F; Gherardi M; Rotondo P
Phys Rev E; 2022 Jun; 105(6-1):064309. PubMed ID: 35854557
15. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
Liu Y; Zhang M; Zhong Z; Zeng X
Med Phys; 2023 Mar; 50(3):1528-1538. PubMed ID: 36057788
16. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
Zang K; Wu W; Luo W
Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640730
17. Theory of deep convolutional neural networks III: Approximating radial functions.
Mao T; Shi Z; Zhou DX
Neural Netw; 2021 Dec; 144():778-790. PubMed ID: 34688019
18. Orthogonal Deep Neural Networks.
Li S; Jia K; Wen Y; Liu T; Tao D
IEEE Trans Pattern Anal Mach Intell; 2021 Apr; 43(4):1352-1368. PubMed ID: 31634826
19. Relative loss bounds for single neurons.
Helmbold DP; Kivinen J; Warmuth MK
IEEE Trans Neural Netw; 1999; 10(6):1291-304. PubMed ID: 18252631
20. Accelerating deep neural network training with inconsistent stochastic gradient descent.
Wang L; Yang Y; Min R; Chakradhar S
Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660