

BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

122 related articles for the article with PubMed ID 38877006

  • 1. A blueprint for precise and fault-tolerant analog neural networks.
    Demirkiran C; Nair L; Bunandar D; Joshi A
    Nat Commun; 2024 Jun; 15(1):5098. PubMed ID: 38877006

  • 2. Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training.
    Junaid M; Aliev H; Park S; Kim H; Yoo H; Sim S
    Sensors (Basel); 2024 Mar; 24(7):. PubMed ID: 38610356

  • 3. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.
    Stromatias E; Neil D; Pfeiffer M; Galluppi F; Furber SB; Liu SC
    Front Neurosci; 2015; 9():222. PubMed ID: 26217169

  • 4. Bulk-Switching Memristor-Based Compute-In-Memory Module for Deep Neural Network Training.
    Wu Y; Wang Q; Wang Z; Wang X; Ayyagari B; Krishnan S; Chudzik M; Lu WD
    Adv Mater; 2023 Nov; 35(46):e2305465. PubMed ID: 37747134

  • 5. Designing Efficient Bit-Level Sparsity-Tolerant Memristive Networks.
    Lyu B; Wen S; Yang Y; Chang X; Sun J; Chen Y; Huang T
    IEEE Trans Neural Netw Learn Syst; 2023 Mar; PP():. PubMed ID: 37030854

  • 6. Enabling Training of Neural Networks on Noisy Hardware.
    Gokmen T
    Front Artif Intell; 2021; 4():699148. PubMed ID: 34568813

  • 7. Bitstream-Based Neural Network for Scalable, Efficient, and Accurate Deep Learning Hardware.
    Sim H; Lee J
    Front Neurosci; 2020; 14():543472. PubMed ID: 33424530

  • 8. Cost-effective stochastic MAC circuits for deep neural networks.
    Sim H; Lee J
    Neural Netw; 2019 Sep; 117():152-162. PubMed ID: 31170575

  • 9. SCA: Search-Based Computing Hardware Architecture with Precision Scalable and Computation Reconfigurable Scheme.
    Chang L; Zhao X; Zhou J
    Sensors (Basel); 2022 Nov; 22(21):. PubMed ID: 36366242

  • 10. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 11. Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic.
    Cococcioni M; Rossi F; Ruffaldi E; Saponara S
    Sensors (Basel); 2020 Mar; 20(5):. PubMed ID: 32164152

  • 12. Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices.
    Spoon K; Tsai H; Chen A; Rasch MJ; Ambrogio S; Mackin C; Fasoli A; Friz AM; Narayanan P; Stanisavljevic M; Burr GW
    Front Comput Neurosci; 2021; 15():675741. PubMed ID: 34290595

  • 13. SalvageDNN: salvaging deep neural network accelerators with permanent faults through saliency-driven fault-aware mapping.
    Abdullah Hanif M; Shafique M
    Philos Trans A Math Phys Eng Sci; 2020 Feb; 378(2164):20190164. PubMed ID: 31865875

  • 14. High-Performance Method and Architecture for Attention Computation in DNN Inference.
    Cheng Q; Hu X; Xiao H; Zhou Y; Duan S
    IEEE Trans Biomed Circuits Syst; 2024 Aug; PP():. PubMed ID: 39088504

  • 15. Programming memristor arrays with arbitrarily high precision for analog computing.
    Song W; Rao M; Li Y; Li C; Zhuo Y; Cai F; Wu M; Yin W; Li Z; Wei Q; Lee S; Zhu H; Gong L; Barnell M; Wu Q; Beerel PA; Chen MS; Ge N; Hu M; Xia Q; Yang JJ
    Science; 2024 Feb; 383(6685):903-910. PubMed ID: 38386733

  • 16. ETA: An Efficient Training Accelerator for DNNs Based on Hardware-Algorithm Co-Optimization.
    Lu J; Ni C; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7660-7674. PubMed ID: 35133969

  • 17. Exploiting Retraining-Based Mixed-Precision Quantization for Low-Cost DNN Accelerator Design.
    Kim N; Shin D; Choi W; Kim G; Park J
    IEEE Trans Neural Netw Learn Syst; 2021 Jul; 32(7):2925-2938. PubMed ID: 32745007

  • 18. SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
    Chen X; Zhao Y; Wang Y; Xu P; You H; Li C; Fu Y; Lin Y; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7099-7113. PubMed ID: 35235521

  • 19. Hardware-Efficient Stochastic Binary CNN Architectures for Near-Sensor Computing.
    Parmar V; Penkovsky B; Querlioz D; Suri M
    Front Neurosci; 2021; 15():781786. PubMed ID: 35069101

  • 20. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
    Dorzhigulov A; Saxena V
    Front Neurosci; 2023; 17():1177592. PubMed ID: 37534034
