

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

104 related articles for article (PubMed ID: 34029196)

  • 1. Drill the Cork of Information Bottleneck by Inputting the Most Important Data.
    Peng X; Zhang J; Wang FY; Li L
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6360-6372. PubMed ID: 34029196

  • 2. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 3. Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme.
    Peng X; Wang FY; Li L
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7910-7920. PubMed ID: 35157598

  • 4. The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima.
    Feng Y; Tu Y
    Proc Natl Acad Sci U S A; 2021 Mar; 118(9):. PubMed ID: 33619091

  • 5. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 6. Adversarial Information Bottleneck.
    Zhai P; Zhang S
    IEEE Trans Neural Netw Learn Syst; 2022 May; PP():. PubMed ID: 35594234

  • 7. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 8. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149():18-28. PubMed ID: 35182851

  • 9. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
    Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265

  • 10. Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle.
    Amjad RA; Geiger BC
    IEEE Trans Pattern Anal Mach Intell; 2020 Sep; 42(9):2225-2239. PubMed ID: 30951462

  • 11. Accelerating DNN Training Through Selective Localized Learning.
    Krithivasan S; Sen S; Venkataramani S; Raghunathan A
    Front Neurosci; 2021; 15():759807. PubMed ID: 35087370

  • 12. Noise Helps Optimization Escape From Saddle Points in the Synaptic Plasticity.
    Fang Y; Yu Z; Chen F
    Front Neurosci; 2020; 14():343. PubMed ID: 32410937

  • 13. Understanding Short-Range Memory Effects in Deep Neural Networks.
    Tan C; Zhang J; Liu J
    IEEE Trans Neural Netw Learn Syst; 2024 Aug; 35(8):10576-10590. PubMed ID: 37027555

  • 14. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 15. Faster Stochastic Quasi-Newton Methods.
    Zhang Q; Huang F; Deng C; Huang H
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4388-4397. PubMed ID: 33667166

  • 16. Weighted SGD for ℓp Regression with Randomized Preconditioning.
    Yang J; Chow YL; Ré C; Mahoney MW
    Proc Annu ACM SIAM Symp Discret Algorithms; 2016 Jan; 2016():558-569. PubMed ID: 29782626

  • 17. Communication-Censored Distributed Stochastic Gradient Descent.
    Li W; Wu Z; Chen T; Li L; Ling Q
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6831-6843. PubMed ID: 34086584

  • 18. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
    Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis U LES
    Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253

  • 19. Personalized On-Device E-Health Analytics With Decentralized Block Coordinate Descent.
    Ye G; Yin H; Chen T; Xu M; Nguyen QVH; Song J
    IEEE J Biomed Health Inform; 2022 Jun; 26(6):2778-2786. PubMed ID: 34986109

  • 20. Information Bottleneck Theory Based Exploration of Cascade Learning.
    Du X; Farrahi K; Niranjan M
    Entropy (Basel); 2021 Oct; 23(10):. PubMed ID: 34682084
