These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

107 related articles for article (PubMed ID: 37141057)

  • 1. Diverse Sample Generation: Pushing the Limit of Generative Data-Free Quantization.
    Qin H; Ding Y; Zhang X; Wang J; Liu X; Lu J
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):11689-11706. PubMed ID: 37141057

  • 2. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 3. Long-range zero-shot generative deep network quantization.
    Luo Y; Gao Y; Zhang Z; Fan J; Zhang H; Xu M
    Neural Netw; 2023 Sep; 166():683-691. PubMed ID: 37604077

  • 4. Optimization-Based Post-Training Quantization With Bit-Split and Stitching.
    Wang P; Chen W; He X; Chen Q; Liu Q; Cheng J
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):2119-2135. PubMed ID: 35290185

  • 5. Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training.
    Liu C; Zhang X; Zhang R; Li L; Zhou S; Huang D; Li Z; Du Z; Liu S; Chen T
    IEEE Trans Image Process; 2022; 31():7006-7019. PubMed ID: 36322492

  • 6. Data Quality-Aware Mixed-Precision Quantization via Hybrid Reinforcement Learning.
    Wang Y; Guo S; Guo J; Zhang Y; Zhang W; Zheng Q; Zhang J
    IEEE Trans Neural Netw Learn Syst; 2024 Jun; PP():. PubMed ID: 38900615

  • 7. MedQ: Lossless ultra-low-bit neural network quantization for medical image segmentation.
    Zhang R; Chung ACS
    Med Image Anal; 2021 Oct; 73():102200. PubMed ID: 34416578

  • 8. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    Kim N; Shin D; Choi W; Kim G; Park J
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2925-2938. PubMed ID: 32745007

  • 9. Training Faster by Separating Modes of Variation in Batch-Normalized Models.
    Kalayeh MM; Shah M
    IEEE Trans Pattern Anal Mach Intell; 2020 Jun; 42(6):1483-1500. PubMed ID: 30703010

  • 10. L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks.
    Wu S; Li G; Deng L; Liu L; Wu D; Xie Y; Shi L
    IEEE Trans Neural Netw Learn Syst; 2019 Jul; 30(7):2043-2051. PubMed ID: 30418924

  • 11. PSAQ-ViT V2: Toward Accurate and General Data-Free Quantization for Vision Transformers.
    Li Z; Chen M; Xiao J; Gu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37578910

  • 12. Whether the Support Region of Three-Bit Uniform Quantizer Has a Strong Impact on Post-Training Quantization for MNIST Dataset?
    Nikolić J; Perić Z; Aleksić D; Tomić S; Jovanović A
    Entropy (Basel); 2021 Dec; 23(12):. PubMed ID: 34946005

  • 13. SensiMix: Sensitivity-Aware 8-bit index & 1-bit value mixed precision quantization for BERT compression.
    Piao T; Cho I; Kang U
    PLoS One; 2022; 17(4):e0265621. PubMed ID: 35436295

  • 14. Degree-Aware Graph Neural Network Quantization.
    Fan Z; Jin X
    Entropy (Basel); 2023 Nov; 25(11):. PubMed ID: 37998202

  • 15. Transform Quantization for CNN Compression.
    Young SI; Zhe W; Taubman D; Girod B
    IEEE Trans Pattern Anal Mach Intell; 2022 Sep; 44(9):5700-5714. PubMed ID: 34048338

  • 16. GradFreeBits: Gradient-Free Bit Allocation for Mixed-Precision Neural Networks.
    Bodner BJ; Ben-Shalom G; Treister E
    Sensors (Basel); 2022 Dec; 22(24):. PubMed ID: 36560141

  • 17. Deep Network Quantization via Error Compensation.
    Peng H; Wu J; Zhang Z; Chen S; Zhang HT
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4960-4970. PubMed ID: 33852390

  • 18. QARV: Quantization-Aware ResNet VAE for Lossy Image Compression.
    Duan Z; Lu M; Ma J; Huang Y; Ma Z; Zhu F
    IEEE Trans Pattern Anal Mach Intell; 2024 Jan; 46(1):436-450. PubMed ID: 37812557

  • 19. Towards Codebook-Free Deep Probabilistic Quantization for Image Retrieval.
    Wang M; Zhou W; Yao X; Tian Q; Li H
    IEEE Trans Pattern Anal Mach Intell; 2024 Jan; 46(1):626-640. PubMed ID: 37831563

  • 20. Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference.
    Hawks B; Duarte J; Fraser NJ; Pappalardo A; Tran N; Umuroglu Y
    Front Artif Intell; 2021; 4():676564. PubMed ID: 34308339
