BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

183 related articles for article (PubMed ID: 32518109)

  • 21. Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning.
    Kawaguchi K; Huang J; Kaelbling LP
    Neural Comput; 2019 Dec; 31(12):2293-2323. PubMed ID: 31614105

  • 22. Generalization and Expressivity for Deep Nets.
    Lin SB
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1392-1406. PubMed ID: 30281491

  • 23. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks.
    Canatar A; Bordelon B; Pehlevan C
    Nat Commun; 2021 May; 12(1):2914. PubMed ID: 34006842

  • 24. Shaping the learning landscape in neural networks around wide flat minima.
    Baldassi C; Pittorino F; Zecchina R
    Proc Natl Acad Sci U S A; 2020 Jan; 117(1):161-170. PubMed ID: 31871189

  • 25. Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm.
    Sum J; Leung CS
    IEEE Trans Neural Netw Learn Syst; 2023 May; 34(5):2619-2632. PubMed ID: 34487503

  • 26. Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.
    Zhang J; Liu T; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37585328

  • 27. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.
    Gorban AN; Mirkes EM; Zinovyev A
    Neural Netw; 2016 Dec; 84():28-38. PubMed ID: 27639721

  • 28. Overparameterized neural networks implement associative memory.
    Radhakrishnan A; Belkin M; Uhler C
    Proc Natl Acad Sci U S A; 2020 Nov; 117(44):27162-27170. PubMed ID: 33067397

  • 29. Analytic Function Approximation by Path-Norm-Regularized Deep Neural Networks.
    Beknazaryan A
    Entropy (Basel); 2022 Aug; 24(8):. PubMed ID: 36010799

  • 30. Understanding Double Descent Using VC-Theoretical Framework.
    Lee EH; Cherkassky V
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; PP():. PubMed ID: 38669171

  • 31. To understand double descent, we need to understand VC theory.
    Cherkassky V; Lee EH
    Neural Netw; 2024 Jan; 169():242-256. PubMed ID: 37913656

  • 32. Implicit Regularization of Dropout.
    Zhang Z; Xu ZJ
    IEEE Trans Pattern Anal Mach Intell; 2024 Jun; 46(6):4206-4217. PubMed ID: 38261480

  • 33. Fast generalization error bound of deep learning without scale invariance of activation functions.
    Terada Y; Hirose R
    Neural Netw; 2020 Sep; 129():344-358. PubMed ID: 32593931

  • 34. A Theoretical Insight Into the Effect of Loss Function for Deep Semantic-Preserving Learning.
    Akbari A; Awais M; Bashar M; Kittler J
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):119-133. PubMed ID: 34283721

  • 35. Another look at statistical learning theory and regularization.
    Cherkassky V; Ma Y
    Neural Netw; 2009 Sep; 22(7):958-69. PubMed ID: 19443179

  • 36. Bayesian Weight Decay on Bounded Approximation for Deep Convolutional Neural Networks.
    Park JG; Jo S
    IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2866-2875. PubMed ID: 30668505

  • 37. Geometry of Energy Landscapes and the Optimizability of Deep Neural Networks.
    Becker S; Zhang Y; Lee AA
    Phys Rev Lett; 2020 Mar; 124(10):108301. PubMed ID: 32216422

  • 38. The Q-norm complexity measure and the minimum gradient method: a novel approach to the machine learning structural risk minimization problem.
    Vieira DA; Takahashi RH; Palade V; Vasconcelos JA; Caminhas WM
    IEEE Trans Neural Netw; 2008 Aug; 19(8):1415-30. PubMed ID: 18701371

  • 39. Data-informed deep optimization.
    Zhang L; Xu ZJ; Zhang Y
    PLoS One; 2022; 17(6):e0270191. PubMed ID: 35737694

  • 40. Deep learning for electroencephalogram (EEG) classification tasks: a review.
    Craik A; He Y; Contreras-Vidal JL
    J Neural Eng; 2019 Jun; 16(3):031001. PubMed ID: 30808014
