
153 related articles for the article with PubMed ID 35235521:

  • 1. SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
    Chen X; Zhao Y; Wang Y; Xu P; You H; Li C; Fu Y; Lin Y; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7099-7113. PubMed ID: 35235521

  • 2. EnforceSNN: Enabling resilient and energy-efficient spiking neural network inference considering approximate DRAMs for embedded systems.
    Putra RVW; Hanif MA; Shafique M
    Front Neurosci; 2022; 16:937782. PubMed ID: 36033624

  • 3. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    Kim N; Shin D; Choi W; Kim G; Park J
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2925-2938. PubMed ID: 32745007

  • 4. ETA: An Efficient Training Accelerator for DNNs Based on Hardware-Algorithm Co-Optimization.
    Lu J; Ni C; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7660-7674. PubMed ID: 35133969

  • 5. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100:49-58. PubMed ID: 29471195

  • 6. Non-Structured DNN Weight Pruning-Is It Beneficial in Any Platform?
    Ma X; Lin S; Ye S; He Z; Zhang L; Yuan G; Tan SH; Li Z; Fan D; Qian X; Lin X; Ma K; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4930-4944. PubMed ID: 33735086

  • 7. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125:70-82. PubMed ID: 32070857

  • 8. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
    Shin D; Kim G; Jo J; Park J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5745-5759. PubMed ID: 34890336

  • 9. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
    Dorzhigulov A; Saxena V
    Front Neurosci; 2023; 17:1177592. PubMed ID: 37534034

  • 10. Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.
    Kulkarni U; Meena SM; Gurlahosur SV; Bhogar G
    Neural Netw; 2021 Apr; 136:28-39. PubMed ID: 33429131

  • 11. A Progressive Subnetwork Searching Framework for Dynamic Inference.
    Yang L; He Z; Cao Y; Fan D
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3809-3820. PubMed ID: 36063528

  • 12. SalvageDNN: salvaging deep neural network accelerators with permanent faults through saliency-driven fault-aware mapping.
    Hanif MA; Shafique M
    Philos Trans A Math Phys Eng Sci; 2020 Feb; 378(2164):20190164. PubMed ID: 31865875

  • 13. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
    Zou C; Cui X; Kuang Y; Liu K; Wang Y; Wang X; Huang R
    Front Neurosci; 2021; 15:694170. PubMed ID: 34867142

  • 14. A Low-Latency DNN Accelerator Enabled by DFT-Based Convolution Execution Within Crossbar Arrays.
    Veluri H; Chand U; Chen CK; Thean AV
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; [Epub ahead of print]. PubMed ID: 38019632

  • 15. Compression of Deep Neural Networks based on quantized tensor decomposition to implement on reconfigurable hardware platforms.
    Nekooei A; Safari S
    Neural Netw; 2022 Jun; 150:350-363. PubMed ID: 35344706

  • 16. Accelerating DNN Training Through Selective Localized Learning.
    Krithivasan S; Sen S; Venkataramani S; Raghunathan A
    Front Neurosci; 2021; 15:759807. PubMed ID: 35087370

  • 17. Resource-constrained FPGA/DNN co-design.
    Zhang Z; Kouzani AZ
    Neural Comput Appl; 2021; 33(21):14741-14751. PubMed ID: 34025038

  • 18. Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks.
    Cheng J; Wu J; Leng C; Wang Y; Hu Q
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4730-4743. PubMed ID: 29990226

  • 19. From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks.
    Geng X; Wang Z; Chen C; Xu Q; Xu K; Jin C; Gupta M; Yang X; Chen Z; Aly MMS; Lin J; Wu M; Li X
    IEEE Trans Neural Netw Learn Syst; 2024 Jun; [Epub ahead of print]. PubMed ID: 38875092

  • 20. Direct Feedback Alignment With Sparse Connections for Local Learning.
    Crafton B; Parihar A; Gebhardt E; Raychowdhury A
    Front Neurosci; 2019; 13:525. PubMed ID: 31178689

    Page 1 of 8.