These tools are no longer maintained as of December 31, 2024. An archived copy of the website and the PubMed4Hh GitHub repository remain available; contact NLM Customer Service with any questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

113 related articles for article (PubMed ID: 37812605)

  • 1. HMC: Hybrid model compression method based on layer sensitivity grouping.
    Yang G; Yu S; Yang H; Nie Z; Wang J
    PLoS One; 2023; 18(10):e0292517. PubMed ID: 37812605

  • 2. Hybrid tensor decomposition in neural network compression.
    Wu B; Wang D; Zhao G; Deng L; Li G
    Neural Netw; 2020 Dec; 132():309-320. PubMed ID: 32977276

  • 3. EDP: An Efficient Decomposition and Pruning Scheme for Convolutional Neural Network Compression.
    Ruan X; Liu Y; Yuan C; Li B; Hu W; Li Y; Maybank S
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4499-4513. PubMed ID: 33136545

  • 4. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 5. Weak sub-network pruning for strong and efficient neural networks.
    Guo Q; Wu XJ; Kittler J; Feng Z
    Neural Netw; 2021 Dec; 144():614-626. PubMed ID: 34653719

  • 6. Discrimination-Aware Network Pruning for Deep Model Compression.
    Liu J; Zhuang B; Zhuang Z; Guo Y; Huang J; Zhu J; Tan M
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4035-4051. PubMed ID: 33755553

  • 7. Perturbation of deep autoencoder weights for model compression and classification of tabular data.
    Abrar S; Samad MD
    Neural Netw; 2022 Dec; 156():160-169. PubMed ID: 36270199

  • 8. Heuristic Method for Minimizing Model Size of CNN by Combining Multiple Pruning Techniques.
    Tian D; Yamagiwa S; Wada K
    Sensors (Basel); 2022 Aug; 22(15):. PubMed ID: 35957431

  • 9. LAP: Latency-aware automated pruning with dynamic-based filter selection.
    Chen Z; Liu C; Yang W; Li K; Li K
    Neural Netw; 2022 Aug; 152():407-418. PubMed ID: 35609502

  • 10. Low tensor train and low multilinear rank approximations of 3D tensors for compression and de-speckling of optical coherence tomography images.
    Kopriva I; Shi F; Lai M; Štanfel M; Chen H; Chen X
    Phys Med Biol; 2023 Jun; 68(12):. PubMed ID: 37201537

  • 11. Feature flow regularization: Improving structured sparsity in deep neural networks.
    Wu Y; Lan Y; Zhang L; Xiang Y
    Neural Netw; 2023 Apr; 161():598-613. PubMed ID: 36822145

  • 12. Random pruning: channel sparsity by expectation scaling factor.
    Sun C; Chen J; Li Y; Wang W; Ma T
    PeerJ Comput Sci; 2023; 9():e1564. PubMed ID: 37705629

  • 13. Nonlinear tensor train format for deep neural network compression.
    Wang D; Zhao G; Chen H; Liu Z; Deng L; Li G
    Neural Netw; 2021 Dec; 144():320-333. PubMed ID: 34547670

  • 14. Accelerated MR parameter mapping with low-rank and sparsity constraints.
    Zhao B; Lu W; Hitchens TK; Lam F; Ho C; Liang ZP
    Magn Reson Med; 2015 Aug; 74(2):489-98. PubMed ID: 25163720

  • 15. Compressing 3DCNNs based on tensor train decomposition.
    Wang D; Zhao G; Li G; Deng L; Wu Y
    Neural Netw; 2020 Nov; 131():215-230. PubMed ID: 32805632

  • 16. ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition.
    Zhong Z; Wei F; Lin Z; Zhang C
    Neural Netw; 2019 Feb; 110():104-115. PubMed ID: 30508807

  • 17. A Novel Deep-Learning Model Compression Based on Filter-Stripe Group Pruning and Its IoT Application.
    Zhao M; Tong X; Wu W; Wang Z; Zhou B; Huang X
    Sensors (Basel); 2022 Jul; 22(15):. PubMed ID: 35957176

  • 18. Non-Structured DNN Weight Pruning-Is It Beneficial in Any Platform?
    Ma X; Lin S; Ye S; He Z; Zhang L; Yuan G; Tan SH; Li Z; Fan D; Qian X; Lin X; Ma K; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4930-4944. PubMed ID: 33735086

  • 19. Auxiliary Pneumonia Classification Algorithm Based on Pruning Compression.
    Yang CP; Zhu JQ; Yan T; Su QL; Zheng LX
    Comput Math Methods Med; 2022; 2022():8415187. PubMed ID: 35898478

  • 20. ACSL: Adaptive correlation-driven sparsity learning for deep neural network compression.
    He W; Wu M; Lam SK
    Neural Netw; 2021 Dec; 144():465-477. PubMed ID: 34600219

Page 1 of 6.