These tools will no longer be maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

158 related articles for article (PubMed ID: 34048338)

  • 1. Transform Quantization for CNN Compression.
    Young SI; Zhe W; Taubman D; Girod B
    IEEE Trans Pattern Anal Mach Intell; 2022 Sep; 44(9):5700-5714. PubMed ID: 34048338

  • 2. A Hardware-Friendly Low-Bit Power-of-Two Quantization Method for CNNs and Its FPGA Implementation.
    Sui X; Lv Q; Bai Y; Zhu B; Zhi L; Yang Y; Tan Z
    Sensors (Basel); 2022 Sep; 22(17):. PubMed ID: 36081072

  • 3. MedQ: Lossless ultra-low-bit neural network quantization for medical image segmentation.
    Zhang R; Chung ACS
    Med Image Anal; 2021 Oct; 73():102200. PubMed ID: 34416578

  • 4. A General Rate-Distortion Optimization Method for Block Compressed Sensing of Images.
    Chen Q; Chen D; Gong J
    Entropy (Basel); 2021 Oct; 23(10):. PubMed ID: 34682078

  • 5. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    Kim N; Shin D; Choi W; Kim G; Park J
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2925-2938. PubMed ID: 32745007

  • 6. Whether the Support Region of Three-Bit Uniform Quantizer Has a Strong Impact on Post-Training Quantization for MNIST Dataset?
    Nikolić J; Perić Z; Aleksić D; Tomić S; Jovanović A
    Entropy (Basel); 2021 Dec; 23(12):. PubMed ID: 34946005

  • 7. SensiMix: Sensitivity-Aware 8-bit index & 1-bit value mixed precision quantization for BERT compression.
    Piao T; Cho I; Kang U
    PLoS One; 2022; 17(4):e0265621. PubMed ID: 35436295

  • 8. Single-Path Bit Sharing for Automatic Loss-Aware Model Compression.
    Liu J; Zhuang B; Chen P; Shen C; Cai J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12459-12473. PubMed ID: 37167046

  • 9. A Convolutional Neural Network-Based Quantization Method for Block Compressed Sensing of Images.
    Gong J; Chen Q; Zhu W; Wang Z
    Entropy (Basel); 2024 May; 26(6):. PubMed ID: 38920476

  • 10. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 11. Adaptive Global Power-of-Two Ternary Quantization Algorithm Based on Unfixed Boundary Thresholds.
    Sui X; Lv Q; Ke C; Li M; Zhuang M; Yu H; Tan Z
    Sensors (Basel); 2023 Dec; 24(1):. PubMed ID: 38203043

  • 12. Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training.
    Liu C; Zhang X; Zhang R; Li L; Zhou S; Huang D; Li Z; Du Z; Liu S; Chen T
    IEEE Trans Image Process; 2022; 31():7006-7019. PubMed ID: 36322492

  • 13. Deep Network Quantization via Error Compensation.
    Peng H; Wu J; Zhang Z; Chen S; Zhang HT
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4960-4970. PubMed ID: 33852390

  • 14. QTTNet: Quantized tensor train neural networks for 3D object and video recognition.
    Lee D; Wang D; Yang Y; Deng L; Zhao G; Li G
    Neural Netw; 2021 Sep; 141():420-432. PubMed ID: 34146969

  • 15. IVS-Caffe-Hardware-Oriented Neural Network Model Development.
    Tsai CC; Guo JI
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5978-5992. PubMed ID: 34310321

  • 16. Low-Complexity Rate-Distortion Optimization of Sampling Rate and Bit-Depth for Compressed Sensing of Images.
    Chen Q; Chen D; Gong J; Ruan J
    Entropy (Basel); 2020 Jan; 22(1):. PubMed ID: 33285900

  • 17. Convolutional Neural Networks Quantization with Double-Stage Squeeze-and-Threshold.
    Wu B; Waschneck B; Mayr CG
    Int J Neural Syst; 2022 Dec; 32(12):2250051. PubMed ID: 36164719

  • 18. FAT: Frequency-Aware Transformation for Bridging Full-Precision and Low-Precision Deep Representations.
    Tao C; Lin R; Chen Q; Zhang Z; Luo P; Wong N
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2640-2654. PubMed ID: 35867358

  • 19. Unsupervised Network Quantization via Fixed-Point Factorization.
    Wang P; He X; Chen Q; Cheng A; Liu Q; Cheng J
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2706-2720. PubMed ID: 32706647

  • 20. Comparison between adaptive search and bit allocation algorithms for image compression using vector quantization.
    Liang KM; Huang CM; Harris RW
    IEEE Trans Image Process; 1995; 4(7):1020-3. PubMed ID: 18290051
