These tools are no longer maintained as of December 31, 2024. An archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.



118 related articles for article (PubMed ID: 38861836)

  • 1. On energy complexity of fully-connected layers.
    Šíma J; Cabessa J; Vidnerová P
    Neural Netw; 2024 May; 178():106419. PubMed ID: 38861836

  • 2. Energy Complexity of Convolutional Neural Networks.
    Šíma J; Vidnerová P; Mrázek V
    Neural Comput; 2024 Jul; 36(8):1601-1625. PubMed ID: 38776959

  • 3. Reconfigurable Architecture and Dataflow for Memory Traffic Minimization of CNNs Computation.
    Cheng WK; Liu XY; Wu HT; Pai HY; Chung PY
    Micromachines (Basel); 2021 Nov; 12(11):. PubMed ID: 34832777

  • 4. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.
    Gokmen T; Onen M; Haensch W
    Front Neurosci; 2017; 11():538. PubMed ID: 29066942

  • 5. Cost-effective stochastic MAC circuits for deep neural networks.
    Sim H; Lee J
    Neural Netw; 2019 Sep; 117():152-162. PubMed ID: 31170575

  • 6. EDCompress: Energy-Aware Model Compression for Dataflows.
    Wang Z; Luo T; Goh RSM; Zhou JT
    IEEE Trans Neural Netw Learn Syst; 2022 May; PP():. PubMed ID: 35560072

  • 7. Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks.
    Cheng J; Wu J; Leng C; Wang Y; Hu Q
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4730-4743. PubMed ID: 29990226

  • 8. Universal mean-field upper bound for the generalization gap of deep neural networks.
    Ariosto S; Pacelli R; Ginelli F; Gherardi M; Rotondo P
    Phys Rev E; 2022 Jun; 105(6-1):064309. PubMed ID: 35854557

  • 9. Accelerating Inference of Convolutional Neural Networks Using In-memory Computing.
    Dazzi M; Sebastian A; Benini L; Eleftheriou E
    Front Comput Neurosci; 2021; 15():674154. PubMed ID: 34413731

  • 10. LocalDrop: A Hybrid Regularization for Deep Neural Networks.
    Lu Z; Xu C; Du B; Ishida T; Zhang L; Sugiyama M
    IEEE Trans Pattern Anal Mach Intell; 2022 Jul; 44(7):3590-3601. PubMed ID: 33621170

  • 11. SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
    Chen X; Zhao Y; Wang Y; Xu P; You H; Li C; Fu Y; Lin Y; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7099-7113. PubMed ID: 35235521

  • 12. Towards Convolutional Neural Network Acceleration and Compression Based on
    Wei M; Zhao Y; Chen X; Li C; Lu J
    Sensors (Basel); 2022 Jun; 22(11):. PubMed ID: 35684919

  • 13. Efficient Layer-Wise
    Xie X; Zhu M; Lu S; Wang Z
    Micromachines (Basel); 2023 Feb; 14(3):. PubMed ID: 36984936

  • 14. ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions.
    Gao H; Wang Z; Cai L; Ji S
    IEEE Trans Pattern Anal Mach Intell; 2021 Aug; 43(8):2570-2581. PubMed ID: 32091991

  • 15. Theory of deep convolutional neural networks: Downsampling.
    Zhou DX
    Neural Netw; 2020 Apr; 124():319-327. PubMed ID: 32036229

  • 16. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
    Zou C; Cui X; Kuang Y; Liu K; Wang Y; Wang X; Huang R
    Front Neurosci; 2021; 15():694170. PubMed ID: 34867142

  • 17. Random sketch learning for deep neural networks in edge computing.
    Li B; Chen P; Liu H; Guo W; Cao X; Du J; Zhao C; Zhang J
    Nat Comput Sci; 2021 Mar; 1(3):221-228. PubMed ID: 38183196

  • 18. Block-term tensor neural networks.
    Ye J; Li G; Chen D; Yang H; Zhe S; Xu Z
    Neural Netw; 2020 Oct; 130():11-21. PubMed ID: 32589587

  • 19. Understanding and mitigating noise in trained deep neural networks.
    Semenova N; Larger L; Brunner D
    Neural Netw; 2022 Feb; 146():151-160. PubMed ID: 34864223

  • 20. Investigation of Deep Spiking Neural Networks Utilizing Gated Schottky Diode as Synaptic Devices.
    Lee ST; Bae JH
    Micromachines (Basel); 2022 Oct; 13(11):. PubMed ID: 36363821
