These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

190 related articles for article (PubMed ID: 34890336)

  • 1. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
    Shin D; Kim G; Jo J; Park J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5745-5759. PubMed ID: 34890336

  • 2. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    Kim N; Shin D; Choi W; Kim G; Park J
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2925-2938. PubMed ID: 32745007

  • 3. Accelerating DNN Training Through Selective Localized Learning.
    Krithivasan S; Sen S; Venkataramani S; Raghunathan A
    Front Neurosci; 2021; 15():759807. PubMed ID: 35087370

  • 4. Enabling Training of Neural Networks on Noisy Hardware.
    Gokmen T
    Front Artif Intell; 2021; 4():699148. PubMed ID: 34568813

  • 5. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
    Stutz D; Chandramoorthy N; Hein M; Schiele B
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3632-3647. PubMed ID: 37815955

  • 6. ETA: An Efficient Training Accelerator for DNNs Based on Hardware-Algorithm Co-Optimization.
    Lu J; Ni C; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7660-7674. PubMed ID: 35133969

  • 7. SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
    Chen X; Zhao Y; Wang Y; Xu P; You H; Li C; Fu Y; Lin Y; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7099-7113. PubMed ID: 35235521

  • 8. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 9. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
    Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265

  • 10. Early Termination Based Training Acceleration for an Energy-Efficient SNN Processor Design.
    Choi S; Lew D; Park J
    IEEE Trans Biomed Circuits Syst; 2022 Jun; 16(3):442-455. PubMed ID: 35687615

  • 11. Computational Offloading in Mobile Edge with Comprehensive and Energy Efficient Cost Function: A Deep Learning Approach.
    Abbas ZH; Ali Z; Abbas G; Jiao L; Bilal M; Suh DY; Piran MJ
    Sensors (Basel); 2021 May; 21(10):. PubMed ID: 34069364

  • 12. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 13. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 14. IVS-Caffe-Hardware-Oriented Neural Network Model Development.
    Tsai CC; Guo JI
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5978-5992. PubMed ID: 34310321

  • 15. Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.
    Liu J; Li B; Zhou Y; Zhao X; Zhu J; Zhang M
    Comput Intell Neurosci; 2022; 2022():9337209. PubMed ID: 35694581

  • 16. Spike Counts Based Low Complexity SNN Architecture With Binary Synapse.
    Tang H; Kim H; Kim H; Park J
    IEEE Trans Biomed Circuits Syst; 2019 Dec; 13(6):1664-1677. PubMed ID: 31603797

  • 17. High-Performance Method and Architecture for Attention Computation in DNN Inference.
    Cheng Q; Hu X; Xiao H; Zhou Y; Duan S
    IEEE Trans Biomed Circuits Syst; 2024 Aug; PP():. PubMed ID: 39088504

  • 18. XGrad: Boosting Gradient-Based Optimizers With Weight Prediction.
    Guan L; Li D; Shi Y; Meng J
    IEEE Trans Pattern Anal Mach Intell; 2024 Oct; 46(10):6731-6747. PubMed ID: 38602857

  • 19. An Improvised Sentiment Analysis Model on Twitter Data Using Stochastic Gradient Descent (SGD) Optimization Algorithm in Stochastic Gate Neural Network (SGNN).
    Vidyashree KP; Rajendra AB
    SN Comput Sci; 2023; 4(2):190. PubMed ID: 36748096

  • 20. Streaming Batch Eigenupdates for Hardware Neural Networks.
    Hoskins BD; Daniels MW; Huang S; Madhavan A; Adam GC; Zhitenev N; McClelland JJ; Stiles MD
    Front Neurosci; 2019; 13():793. PubMed ID: 31447628
