These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

94 related articles for article (PubMed ID: 32094327)

  • 1. Complexity control by gradient descent in deep networks.
    Poggio T; Liao Q; Banburski A
    Nat Commun; 2020 Feb; 11(1):1027. PubMed ID: 32094327

  • 2. Theoretical issues in deep networks.
    Poggio T; Banburski A; Liao Q
    Proc Natl Acad Sci U S A; 2020 Dec; 117(48):30039-30045. PubMed ID: 32518109

  • 3. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
    Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis U LES
    Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253

  • 4. Theory of adaptive SVD regularization for deep neural networks.
    Bejani MM; Ghatee M
    Neural Netw; 2020 Aug; 128():33-46. PubMed ID: 32413786

  • 5. Partial local entropy and anisotropy in deep weight spaces.
    Musso D
    Phys Rev E; 2021 Apr; 103(4-1):042303. PubMed ID: 34005873

  • 6. A topological description of loss surfaces based on Betti Numbers.
    Bucarelli MS; D'Inverno GA; Bianchini M; Scarselli F; Silvestri F
    Neural Netw; 2024 Jun; 178():106465. PubMed ID: 38943863

  • 7. High-dimensional dynamics of generalization error in neural networks.
    Advani MS; Saxe AM; Sompolinsky H
    Neural Netw; 2020 Dec; 132():428-446. PubMed ID: 33022471

  • 8. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.
    Wu W; Fan Q; Zurada JM; Wang J; Yang D; Liu Y
    Neural Netw; 2014 Feb; 50():72-8. PubMed ID: 24291693

  • 9. Learning Deep Gradient Descent Optimization for Image Deconvolution.
    Gong D; Zhang Z; Shi Q; van den Hengel A; Shen C; Zhang Y
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5468-5482. PubMed ID: 32078566

  • 10. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 11. Reduced HyperBF networks: regularization by explicit complexity reduction and scaled Rprop-based training.
    Mahdi RN; Rouchka EC
    IEEE Trans Neural Netw; 2011 May; 22(5):673-86. PubMed ID: 21421438

  • 12. Reformulated radial basis neural networks trained by gradient descent.
    Karayiannis NB
    IEEE Trans Neural Netw; 1999; 10(3):657-71. PubMed ID: 18252566

  • 13. Transformed ℓ1 regularization for learning sparse deep neural networks.
    Ma R; Miao J; Niu L; Zhang P
    Neural Netw; 2019 Nov; 119():286-298. PubMed ID: 31499353

  • 14. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 15. Weight assignment for adaptive image restoration by neural networks.
    Perry SW; Guan L
    IEEE Trans Neural Netw; 2000; 11(1):156-70. PubMed ID: 18249747

  • 16. A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics.
    Fioresi R; Chaudhari P; Soatto S
    Entropy (Basel); 2020 Jan; 22(1):. PubMed ID: 33285876

  • 17. Shakeout: A New Approach to Regularized Deep Neural Network Training.
    Kang G; Li J; Tao D
    IEEE Trans Pattern Anal Mach Intell; 2018 May; 40(5):1245-1258. PubMed ID: 28489533

  • 18. Natural gradient learning algorithms for RBF networks.
    Zhao J; Wei H; Zhang C; Li W; Guo W; Zhang K
    Neural Comput; 2015 Feb; 27(2):481-505. PubMed ID: 25380332

  • 19. Cross-Batch Reference Learning for Deep Retrieval.
    Yang HF; Lin K; Chen TY; Chen CS
    IEEE Trans Neural Netw Learn Syst; 2020 Sep; 31(9):3145-3158. PubMed ID: 31545744

  • 20. Networks with trainable amplitude of activation functions.
    Trentin E
    Neural Netw; 2001 May; 14(4-5):471-93. PubMed ID: 11411633
