BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

110 related articles for article with PubMed ID 36745941 (items 21-40 shown)

  • 21. Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks.
    Chen Z; Xu TB; Du C; Liu CL; He H
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):799-813. PubMed ID: 32275616

  • 22. ModuleNet: Knowledge-Inherited Neural Architecture Search.
    Chen Y; Gao R; Liu F; Zhao D
    IEEE Trans Cybern; 2022 Nov; 52(11):11661-11671. PubMed ID: 34097629

  • 23. Deep Neural Network Compression by In-Parallel Pruning-Quantization.
    Tung F; Mori G
    IEEE Trans Pattern Anal Mach Intell; 2020 Mar; 42(3):568-579. PubMed ID: 30561340

  • 24. Evaluation of Deep Neural Network Compression Methods for Edge Devices Using Weighted Score-Based Ranking Scheme.
    Ademola OA; Leier M; Petlenkov E
Sensors (Basel); 2021 Nov; 21(22). PubMed ID: 34833610

  • 25. LAP: Latency-aware automated pruning with dynamic-based filter selection.
    Chen Z; Liu C; Yang W; Li K; Li K
    Neural Netw; 2022 Aug; 152():407-418. PubMed ID: 35609502

  • 26. DMPP: Differentiable multi-pruner and predictor for neural network pruning.
    Li J; Zhao B; Liu D
    Neural Netw; 2022 Mar; 147():103-112. PubMed ID: 34998270

  • 27. Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference.
    Hawks B; Duarte J; Fraser NJ; Pappalardo A; Tran N; Umuroglu Y
    Front Artif Intell; 2021; 4():676564. PubMed ID: 34308339

  • 28. Non-Structured DNN Weight Pruning-Is It Beneficial in Any Platform?
    Ma X; Lin S; Ye S; He Z; Zhang L; Yuan G; Tan SH; Li Z; Fan D; Qian X; Lin X; Ma K; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4930-4944. PubMed ID: 33735086

  • 29. ECG data compression using a neural network model based on multi-objective optimization.
    Zhang B; Zhao J; Chen X; Wu J
    PLoS One; 2017; 12(10):e0182500. PubMed ID: 28972986

  • 30. Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms.
    Kulkarni U; S M M; Gurlahosur SV; Bhogar G
    Neural Netw; 2021 Apr; 136():28-39. PubMed ID: 33429131

  • 31. LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks.
    Tartaglione E; Bragagnolo A; Fiandrotti A; Grangetto M
    Neural Netw; 2022 Feb; 146():230-237. PubMed ID: 34906759

  • 32. One-Shot Neural Architecture Search by Dynamically Pruning Supernet in Hierarchical Order.
    Zhang J; Li D; Wang L; Zhang L
    Int J Neural Syst; 2021 Jul; 31(7):2150029. PubMed ID: 34128778

  • 33. Structured pruning of recurrent neural networks through neuron selection.
    Wen L; Zhang X; Bai H; Xu Z
    Neural Netw; 2020 Mar; 123():134-141. PubMed ID: 31855748

  • 34. Joint design and compression of convolutional neural networks as a Bi-level optimization problem.
    Louati H; Bechikh S; Louati A; Aldaej A; Said LB
    Neural Comput Appl; 2022; 34(17):15007-15029. PubMed ID: 35599971

  • 35. Small Network for Lightweight Task in Computer Vision: A Pruning Method Based on Feature Representation.
    Ge Y; Lu S; Gao F
    Comput Intell Neurosci; 2021; 2021():5531023. PubMed ID: 33959156

  • 36. StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.
    Zhang T; Ye S; Feng X; Ma X; Zhang K; Li Z; Tang J; Liu S; Lin X; Liu Y; Fardad M; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2022 May; 33(5):2259-2273. PubMed ID: 33587706

  • 37. EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks.
    Poyatos J; Molina D; Martinez AD; Del Ser J; Herrera F
    Neural Netw; 2023 Jan; 158():59-82. PubMed ID: 36442374

  • 38. DeepCompNet: A Novel Neural Net Model Compression Architecture.
    Mary Shanthi Rani M; Chitra P; Lakshmanan S; Kalpana Devi M; Sangeetha R; Nithya S
    Comput Intell Neurosci; 2022; 2022():2213273. PubMed ID: 35242176

  • 39. Self-Distillation: Towards Efficient and Compact Neural Networks.
    Zhang L; Bao C; Ma K
    IEEE Trans Pattern Anal Mach Intell; 2022 Aug; 44(8):4388-4403. PubMed ID: 33735074

  • 40. Coarse-Grained Pruning of Neural Network Models Based on Blocky Sparse Structure.
    Huang L; Zeng J; Sun S; Wang W; Wang Y; Wang K
Entropy (Basel); 2021 Aug; 23(8). PubMed ID: 34441182
