These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

269 related articles for article (PubMed ID: 35694575)

  • 21. A transfer learning with structured filter pruning approach for improved breast cancer classification on point-of-care devices.
    Choudhary T; Mishra V; Goswami A; Sarangapani J
    Comput Biol Med; 2021 Jul; 134():104432. PubMed ID: 33964737

  • 22. An FPGA implementation of Bayesian inference with spiking neural networks.
    Li H; Wan B; Fang Y; Li Q; Liu JK; An L
    Front Neurosci; 2023; 17():1291051. PubMed ID: 38249589

  • 23. A Pruning Method for Deep Convolutional Network Based on Heat Map Generation Metrics.
    Zhang W; Wang N; Chen K; Liu Y; Zhao T
    Sensors (Basel); 2022 Mar; 22(5):. PubMed ID: 35271168

  • 24. Filter pruning for convolutional neural networks in semantic image segmentation.
    López-González CI; Gascó E; Barrientos-Espillco F; Besada-Portas E; Pajares G
    Neural Netw; 2024 Jan; 169():713-732. PubMed ID: 37976595

  • 25. Extremely Sparse Networks via Binary Augmented Pruning for Fast Image Classification.
    Wang P; Li F; Li G; Cheng J
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4167-4180. PubMed ID: 34752405

  • 26. LAP: Latency-aware automated pruning with dynamic-based filter selection.
    Chen Z; Liu C; Yang W; Li K; Li K
    Neural Netw; 2022 Aug; 152():407-418. PubMed ID: 35609502

  • 27. Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors.
    Junaid M; Arslan S; Lee T; Kim H
    Sensors (Basel); 2022 Feb; 22(3):. PubMed ID: 35161975

  • 28. Toward Full-Stack Acceleration of Deep Convolutional Neural Networks on FPGAs.
    Liu S; Fan H; Ferianc M; Niu X; Shi H; Luk W
    IEEE Trans Neural Netw Learn Syst; 2022 Aug; 33(8):3974-3987. PubMed ID: 33577458

  • 29. Designing Deep Learning Hardware Accelerator and Efficiency Evaluation.
    Qi Z; Chen W; Naqvi RA; Siddique K
    Comput Intell Neurosci; 2022; 2022():1291103. PubMed ID: 35875766

  • 30. Efficient FPGA Implementation of Convolutional Neural Networks and Long Short-Term Memory for Radar Emitter Signal Recognition.
    Wu B; Wu X; Li P; Gao Y; Si J; Al-Dhahir N
    Sensors (Basel); 2024 Jan; 24(3):. PubMed ID: 38339606

  • 31. High-Performance Acceleration of 2-D and 3-D CNNs on FPGAs Using Static Block Floating Point.
    Fan H; Liu S; Que Z; Niu X; Luk W
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4473-4487. PubMed ID: 34644253

  • 32. Random pruning: channel sparsity by expectation scaling factor.
    Sun C; Chen J; Li Y; Wang W; Ma T
    PeerJ Comput Sci; 2023; 9():e1564. PubMed ID: 37705629

  • 33. Interpretable Artificial Intelligence through Locality Guided Neural Networks.
    Tan R; Gao L; Khan N; Guan L
    Neural Netw; 2022 Nov; 155():58-73. PubMed ID: 36041281

  • 34. PCA driven mixed filter pruning for efficient convNets.
    Ahmed W; Ansari S; Hanif M; Khalil A
    PLoS One; 2022; 17(1):e0262386. PubMed ID: 35073373

  • 35. PSE-Net: Channel pruning for Convolutional Neural Networks with parallel-subnets estimator.
    Wang S; Xie T; Liu H; Zhang X; Cheng J
    Neural Netw; 2024 Jun; 174():106263. PubMed ID: 38547802

  • 36. Redundant feature pruning for accelerated inference in deep neural networks.
    Ayinde BO; Inanc T; Zurada JM
    Neural Netw; 2019 Oct; 118():148-158. PubMed ID: 31279285

  • 37. Neuron pruning in temporal domain for energy efficient SNN processor design.
    Lew D; Tang H; Park J
    Front Neurosci; 2023; 17():1285914. PubMed ID: 38099202

  • 38. Low Complexity Binarized 2D-CNN Classifier for Wearable Edge AI Devices.
    Wong DLT; Li Y; John D; Ho WK; Heng CH
    IEEE Trans Biomed Circuits Syst; 2022 Oct; 16(5):822-831. PubMed ID: 35921347

  • 39. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
    Kim Y; Panda P
    Neural Netw; 2021 Dec; 144():686-698. PubMed ID: 34662827

  • 40. Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations.
    Chen S; Zhao Q
    IEEE Trans Pattern Anal Mach Intell; 2019 Dec; 41(12):3048-3056. PubMed ID: 30296213
