
188 related articles for the article with PubMed ID 34554918:

  • 1. An Optimal Transport Analysis on Generalization in Deep Learning.
    Zhang J; Liu T; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; 34(6):2842-2853. PubMed ID: 34554918

  • 2. Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.
    Zhang J; Liu T; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37585328

  • 3. The Vapnik-Chervonenkis dimension of graph and recursive neural networks.
    Scarselli F; Tsoi AC; Hagenbuchner M
    Neural Netw; 2018 Dec; 108():248-259. PubMed ID: 30219742

  • 4. Towards a Unified Theory of Learning and Information.
    Alabdulmohsin I
    Entropy (Basel); 2020 Apr; 22(4):. PubMed ID: 33286212

  • 5. Synergistic Integration of Deep Neural Networks and Finite Element Method with Applications of Nonlinear Large Deformation Biomechanics.
    Liang L; Liu M; Elefteriades J; Sun W
    Comput Methods Appl Mech Eng; 2023 Nov; 416():. PubMed ID: 38370344

  • 6. Universal mean-field upper bound for the generalization gap of deep neural networks.
    Ariosto S; Pacelli R; Ginelli F; Gherardi M; Rotondo P
    Phys Rev E; 2022 Jun; 105(6-1):064309. PubMed ID: 35854557

  • 7. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation.
    Kearns M; Ron D
    Neural Comput; 1999 Aug; 11(6):1427-53. PubMed ID: 10423502

  • 8. A local Vapnik-Chervonenkis complexity.
    Oneto L; Anguita D; Ridella S
    Neural Netw; 2016 Oct; 82():62-75. PubMed ID: 27474843

  • 9. A Theoretical Insight Into the Effect of Loss Function for Deep Semantic-Preserving Learning.
    Akbari A; Awais M; Bashar M; Kittler J
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):119-133. PubMed ID: 34283721

  • 10. Orthogonal Deep Neural Networks.
    Li S; Jia K; Wen Y; Liu T; Tao D
    IEEE Trans Pattern Anal Mach Intell; 2021 Apr; 43(4):1352-1368. PubMed ID: 31634826

  • 11. On Cross-Corpus Generalization of Deep Learning Based Speech Enhancement.
    Pandey A; Wang D
    IEEE/ACM Trans Audio Speech Lang Process; 2020; 28():2489-2499. PubMed ID: 33748327

  • 12. Towards more practical average bounds on supervised learning.
    Gu H; Takahashi H
    IEEE Trans Neural Netw; 1996; 7(4):953-68. PubMed ID: 18263490

  • 13. Theoretical bounds of generalization error for generalized extreme learning machine and random vector functional link network.
    Kim M
    Neural Netw; 2023 Jul; 164():49-66. PubMed ID: 37146449

  • 14. Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness.
    Jin P; Lu L; Tang Y; Karniadakis GE
    Neural Netw; 2020 Oct; 130():85-99. PubMed ID: 32650153

  • 15. On the practical applicability of VC dimension bounds.
    Holden SB; Niranjan M
    Neural Comput; 1995 Nov; 7(6):1265-88. PubMed ID: 7584902

  • 16. Examining the Causal Structures of Deep Neural Networks Using Information Theory.
    Marrow S; Michaud EJ; Hoel E
    Entropy (Basel); 2020 Dec; 22(12):. PubMed ID: 33353094

  • 17. Causal importance of low-level feature selectivity for generalization in image recognition.
    Ukita J
    Neural Netw; 2020 May; 125():185-193. PubMed ID: 32145648

  • 18. To understand double descent, we need to understand VC theory.
    Cherkassky V; Lee EH
    Neural Netw; 2024 Jan; 169():242-256. PubMed ID: 37913656

  • 19. The Data Efficiency of Deep Learning Is Degraded by Unnecessary Input Dimensions.
    D'Amario V; Srivastava S; Sasaki T; Boix X
    Front Comput Neurosci; 2022; 16():760085. PubMed ID: 35173595

  • 20. Why Is Everyone Training Very Deep Neural Network With Skip Connections?
    Oyedotun OK; Ismaeil KA; Aouada D
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5961-5975. PubMed ID: 34986102
