These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

127 related articles for article (PubMed ID: 36322492)

  • 21. Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC.
    Ren Y; Cao Y; Ye C; Cheng X
    Sci Rep; 2023 Jul; 13(1):11658. PubMed ID: 37468562

  • 22. Convolutional Neural Networks Quantization with Double-Stage Squeeze-and-Threshold.
    Wu B; Waschneck B; Mayr CG
    Int J Neural Syst; 2022 Dec; 32(12):2250051. PubMed ID: 36164719

  • 23. Quantformer: Learning Extremely Low-Precision Vision Transformers.
    Wang Z; Wang C; Xu X; Zhou J; Lu J
    IEEE Trans Pattern Anal Mach Intell; 2023 Jul; 45(7):8813-8826. PubMed ID: 37015428

  • 24. Low precision decentralized distributed training over IID and non-IID data.
    Aketi SA; Kodge S; Roy K
    Neural Netw; 2022 Nov; 155():451-460. PubMed ID: 36152377

  • 25. GradFreeBits: Gradient-Free Bit Allocation for Mixed-Precision Neural Networks.
    Bodner BJ; Ben-Shalom G; Treister E
    Sensors (Basel); 2022 Dec; 22(24):. PubMed ID: 36560141

  • 26. A Hardware-Friendly Low-Bit Power-of-Two Quantization Method for CNNs and Its FPGA Implementation.
    Sui X; Lv Q; Bai Y; Zhu B; Zhi L; Yang Y; Tan Z
    Sensors (Basel); 2022 Sep; 22(17):. PubMed ID: 36081072

  • 27. IVS-Caffe - Hardware-Oriented Neural Network Model Development.
    Tsai CC; Guo JI
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5978-5992. PubMed ID: 34310321

  • 28. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.
    Zhang S; Chen H; Liu M; Zhang Q
    Sensors (Basel); 2017 Nov; 17(11):. PubMed ID: 29112117

  • 29. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
    Shin D; Kim G; Jo J; Park J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5745-5759. PubMed ID: 34890336

  • 30. Deep Network Quantization via Error Compensation.
    Peng H; Wu J; Zhang Z; Chen S; Zhang HT
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4960-4970. PubMed ID: 33852390

  • 31. IPAD: Intensity Potential for Adaptive De-Quantization.
    Liu J; Zhai G; Liu A; Yang X; Zhao X; Chen CW
    IEEE Trans Image Process; 2018 Oct; 27(10):4860-4872. PubMed ID: 29969397

  • 32. Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.
    Kulkarni U; S M M; Gurlahosur SV; Bhogar G
    Neural Netw; 2021 Apr; 136():28-39. PubMed ID: 33429131

  • 33. FPGA-Based Hybrid-Type Implementation of Quantized Neural Networks for Remote Sensing Applications.
    Wei X; Liu W; Chen L; Ma L; Chen H; Zhuang Y
    Sensors (Basel); 2019 Feb; 19(4):. PubMed ID: 30813259

  • 34. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
    Stutz D; Chandramoorthy N; Hein M; Schiele B
    IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3632-3647. PubMed ID: 37815955

  • 35. A Novel Low-Bit Quantization Strategy for Compressing Deep Neural Networks.
    Long X; Zeng X; Ben Z; Zhou D; Zhang M
    Comput Intell Neurosci; 2020; 2020():7839064. PubMed ID: 32148472

  • 36. Quantization and Deployment of Deep Neural Networks on Microcontrollers.
    Novac PE; Boukli Hacene G; Pegatoquet A; Miramond B; Gripon V
    Sensors (Basel); 2021 Apr; 21(9):. PubMed ID: 33922868

  • 37. PSAQ-ViT V2: Toward Accurate and General Data-Free Quantization for Vision Transformers.
    Li Z; Chen M; Xiao J; Gu Q
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; PP():. PubMed ID: 37578910

  • 38. FAT: Frequency-Aware Transformation for Bridging Full-Precision and Low-Precision Deep Representations.
    Tao C; Lin R; Chen Q; Zhang Z; Luo P; Wong N
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2640-2654. PubMed ID: 35867358

  • 39. Design of a 2-Bit Neural Network Quantizer for Laplacian Source.
    Perić Z; Savić M; Simić N; Denić B; Despotović V
    Entropy (Basel); 2021 Jul; 23(8):. PubMed ID: 34441074

  • 40. Towards Codebook-Free Deep Probabilistic Quantization for Image Retrieval.
    Wang M; Zhou W; Yao X; Tian Q; Li H
    IEEE Trans Pattern Anal Mach Intell; 2024 Jan; 46(1):626-640. PubMed ID: 37831563
