174 related articles for article (PubMed ID: 31871189)

  • 1. Shaping the learning landscape in neural networks around wide flat minima.
    Baldassi C; Pittorino F; Zecchina R
    Proc Natl Acad Sci U S A; 2020 Jan; 117(1):161-170. PubMed ID: 31871189

  • 2. A mean field view of the landscape of two-layer neural networks.
    Mei S; Montanari A; Nguyen PM
    Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315

  • 3. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149():18-28. PubMed ID: 35182851

  • 4. Typical and atypical solutions in nonconvex neural networks with discrete and continuous weights.
    Baldassi C; Malatesta EM; Perugini G; Zecchina R
    Phys Rev E; 2023 Aug; 108(2-1):024310. PubMed ID: 37723812

  • 5. The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima.
    Feng Y; Tu Y
    Proc Natl Acad Sci U S A; 2021 Mar; 118(9):. PubMed ID: 33619091

  • 6. Learning through atypical phase transitions in overparameterized neural networks.
    Baldassi C; Lauditi C; Malatesta EM; Pacelli R; Perugini G; Zecchina R
    Phys Rev E; 2022 Jul; 106(1-1):014116. PubMed ID: 35974501

  • 7. Unveiling the Structure of Wide Flat Minima in Neural Networks.
    Baldassi C; Lauditi C; Malatesta EM; Perugini G; Zecchina R
    Phys Rev Lett; 2021 Dec; 127(27):278301. PubMed ID: 35061428

  • 8. Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions.
    Yang N; Tang C; Tu Y
    Phys Rev Lett; 2023 Jun; 130(23):237101. PubMed ID: 37354404

  • 9. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes.
    Baldassi C; Borgs C; Chayes JT; Ingrosso A; Lucibello C; Saglietti L; Zecchina R
    Proc Natl Acad Sci U S A; 2016 Nov; 113(48):E7655-E7662. PubMed ID: 27856745

  • 10. Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses.
    Frye CG; Simon J; Wadia NS; Ligeralde A; DeWeese MR; Bouchard KE
    Neural Comput; 2021 May; 33(6):1469-1497. PubMed ID: 34496389

  • 11. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks.
    Liu B; Liu Z; Zhang T; Yuan T
    Neural Netw; 2021 Dec; 144():75-89. PubMed ID: 34454244

  • 12. Geometry of Energy Landscapes and the Optimizability of Deep Neural Networks.
    Becker S; Zhang Y; Lee AA
    Phys Rev Lett; 2020 Mar; 124(10):108301. PubMed ID: 32216422

  • 13. Learning smooth dendrite morphological neurons by stochastic gradient descent for pattern classification.
    Gómez-Flores W; Sossa H
    Neural Netw; 2023 Nov; 168():665-676. PubMed ID: 37857137

  • 14. High-dimensional dynamics of generalization error in neural networks.
    Advani MS; Saxe AM; Sompolinsky H
    Neural Netw; 2020 Dec; 132():428-446. PubMed ID: 33022471

  • 15. Structure of the space of folding protein sequences defined by large language models.
    Zambon A; Zecchina R; Tiana G
    Phys Biol; 2024 Jan; 21(2):. PubMed ID: 38237200

  • 16. Archetypal landscapes for deep neural networks.
    Verpoort PC; Lee AA; Wales DJ
    Proc Natl Acad Sci U S A; 2020 Sep; 117(36):21857-21864. PubMed ID: 32843349

  • 17. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 18. On the problem of local minima in recurrent neural networks.
    Bianchini M; Gori M; Maggini M
    IEEE Trans Neural Netw; 1994; 5(2):167-77. PubMed ID: 18267788

  • 19. Piecewise convexity of artificial neural networks.
    Rister B; Rubin DL
    Neural Netw; 2017 Oct; 94():34-45. PubMed ID: 28732233

  • 20. Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.
    Zhang J; Liu T; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37585328
