These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

149 related articles for article (PubMed ID: 33735086)

  • 1. Non-Structured DNN Weight Pruning-Is It Beneficial in Any Platform?
    Ma X; Lin S; Ye S; He Z; Zhang L; Yuan G; Tan SH; Li Z; Fan D; Qian X; Lin X; Ma K; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4930-4944. PubMed ID: 33735086

  • 2. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 3. A Hardware-Friendly High-Precision CNN Pruning Method and Its FPGA Implementation.
    Sui X; Lv Q; Zhi L; Zhu B; Yang Y; Zhang Y; Tan Z
    Sensors (Basel); 2023 Jan; 23(2):. PubMed ID: 36679624

  • 4. GRIM: A General, Real-Time Deep Learning Inference Framework for Mobile Devices Based on Fine-Grained Structured Weight Sparsity.
    Niu W; Li Z; Ma X; Dong P; Zhou G; Qian X; Lin X; Wang Y; Ren B
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):6224-6239. PubMed ID: 34133272

  • 5. Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.
    Wu X; Lin DT; Chen R; Bhattacharyya SS
    J Neural Eng; 2023 Jul; 20(4):. PubMed ID: 37429288

  • 6. Resource-constrained FPGA/DNN co-design.
    Zhang Z; Kouzani AZ
    Neural Comput Appl; 2021; 33(21):14741-14751. PubMed ID: 34025038

  • 7. Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning.
    Lin S; Ji R; Li Y; Deng C; Li X
    IEEE Trans Neural Netw Learn Syst; 2020 Feb; 31(2):574-588. PubMed ID: 30990448

  • 8. A Hardware-Friendly Low-Bit Power-of-Two Quantization Method for CNNs and Its FPGA Implementation.
    Sui X; Lv Q; Bai Y; Zhu B; Zhi L; Yang Y; Tan Z
    Sensors (Basel); 2022 Sep; 22(17):. PubMed ID: 36081072

  • 9. Dynamic Probabilistic Pruning: A General Framework for Hardware-Constrained Pruning at Different Granularities.
    Gonzalez-Carabarin L; Huijben IAM; Veeling B; Schmid A; van Sloun RJG
    IEEE Trans Neural Netw Learn Syst; 2022 Jun; PP():. PubMed ID: 35675247

  • 10. SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
    Chen X; Zhao Y; Wang Y; Xu P; You H; Li C; Fu Y; Lin Y; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7099-7113. PubMed ID: 35235521

  • 11. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    Kim N; Shin D; Choi W; Kim G; Park J
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2925-2938. PubMed ID: 32745007

  • 12. Deep Neural Network Compression by In-Parallel Pruning-Quantization.
    Tung F; Mori G
    IEEE Trans Pattern Anal Mach Intell; 2020 Mar; 42(3):568-579. PubMed ID: 30561340

  • 13. Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference.
    Hawks B; Duarte J; Fraser NJ; Pappalardo A; Tran N; Umuroglu Y
    Front Artif Intell; 2021; 4():676564. PubMed ID: 34308339

  • 14. From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks.
    Geng X; Wang Z; Chen C; Xu Q; Xu K; Jin C; Gupta M; Yang X; Chen Z; Aly MMS; Lin J; Wu M; Li X
    IEEE Trans Neural Netw Learn Syst; 2024 Jun; PP():. PubMed ID: 38875092

  • 15. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 16. Weight-adaptive joint mixed-precision quantization and pruning for neural network-based equalization in short-reach direct detection links.
    Xu Z; Wu Q; Lu W; Ji H; Chen H; Ji T; Yang Y; Qiao G; Tang J; Cheng C; Liu L; Wang S; Liang J; Wei J; Hu W; Shieh W
    Opt Lett; 2024 Jun; 49(12):3500-3503. PubMed ID: 38875655

  • 17. Single-Path Bit Sharing for Automatic Loss-Aware Model Compression.
    Liu J; Zhuang B; Chen P; Shen C; Cai J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12459-12473. PubMed ID: 37167046

  • 18. Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.
    Kulkarni U; S M M; Gurlahosur SV; Bhogar G
    Neural Netw; 2021 Apr; 136():28-39. PubMed ID: 33429131

  • 19. Discrimination-Aware Network Pruning for Deep Model Compression.
    Liu J; Zhuang B; Zhuang Z; Guo Y; Huang J; Zhu J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4035-4051. PubMed ID: 33755553

  • 20. CRESPR: Modular sparsification of DNNs to improve pruning performance and model interpretability.
    Kang T; Ding W; Chen P
    Neural Netw; 2024 Apr; 172():106067. PubMed ID: 38199151
