BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

141 related articles for article (PubMed ID: 34890336)

  • 21. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 22. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 23. Neural Network Training With Asymmetric Crosspoint Elements.
    Onen M; Gokmen T; Todorov TK; Nowicki T; Del Alamo JA; Rozen J; Haensch W; Kim S
    Front Artif Intell; 2022; 5():891624. PubMed ID: 35615470

  • 24. Incremental PID Controller-Based Learning Rate Scheduler for Stochastic Gradient Descent.
    Wang Z; Zhang J
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):7060-7071. PubMed ID: 36288221

  • 25. Mitigating carbon footprint for knowledge distillation based deep learning model compression.
    Rafat K; Islam S; Mahfug AA; Hossain MI; Rahman F; Momen S; Rahman S; Mohammed N
    PLoS One; 2023; 18(5):e0285668. PubMed ID: 37186614

  • 26. Acceleration of Deep Neural Network Training Using Field Programmable Gate Arrays.
    Tufa GT; Andargie FA; Bijalwan A
    Comput Intell Neurosci; 2022; 2022():8387364. PubMed ID: 36299439

  • 27. A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics.
    Fioresi R; Chaudhari P; Soatto S
    Entropy (Basel); 2020 Jan; 22(1):. PubMed ID: 33285876

  • 28. Improving Deep Neural Networks' Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum.
    Wang B; Ye Q
    IEEE Trans Neural Netw Learn Syst; 2024 Sep; 35(9):12288-12300. PubMed ID: 37030680

  • 29. Algorithm for Training Neural Networks on Resistive Device Arrays.
    Gokmen T; Haensch W
    Front Neurosci; 2020; 14():103. PubMed ID: 32174807

  • 30. Sign-Based Gradient Descent With Heterogeneous Data: Convergence and Byzantine Resilience.
    Jin R; Liu Y; Huang Y; He X; Wu T; Dai H
    IEEE Trans Neural Netw Learn Syst; 2024 Jan; PP():. PubMed ID: 38215315

  • 31. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149():18-28. PubMed ID: 35182851

  • 32. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent.
    De Sa C; Feldman M; Ré C; Olukotun K
    Proc Int Symp Comput Archit; 2017 Jun; 2017():561-574. PubMed ID: 29391770

  • 33. Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training.
    Junaid M; Aliev H; Park S; Kim H; Yoo H; Sim S
    Sensors (Basel); 2024 Mar; 24(7):. PubMed ID: 38610356

  • 34. Training memristor-based multilayer neuromorphic networks with SGD, momentum and adaptive learning rates.
    Yan Z; Chen J; Hu R; Huang T; Chen Y; Wen S
    Neural Netw; 2020 Aug; 128():142-149. PubMed ID: 32446191

  • 35. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
    Dutta S; Schafer C; Gomez J; Ni K; Joshi S; Datta S
    Front Neurosci; 2020; 14():634. PubMed ID: 32670012

  • 36. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
    Liu F; Zhao W; Chen Y; Wang Z; Yang T; Jiang L
    Front Neurosci; 2021; 15():756876. PubMed ID: 34803591

  • 37. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 38. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
    Dorzhigulov A; Saxena V
    Front Neurosci; 2023; 17():1177592. PubMed ID: 37534034

  • 39. Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations.
    Gokmen T; Vlasov Y
    Front Neurosci; 2016; 10():333. PubMed ID: 27493624

  • 40. SSGD: Sparsity-Promoting Stochastic Gradient Descent Algorithm for Unbiased DNN Pruning.
    Lee CH; Fedorov I; Rao BD; Garudadri H
    Proc IEEE Int Conf Acoust Speech Signal Process; 2020 May; 2020():5410-5414. PubMed ID: 33162834
