These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

200 related articles for article (PubMed ID: 32164152)

  • 1. Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic.
    Cococcioni M; Rossi F; Ruffaldi E; Saponara S
    Sensors (Basel); 2020 Mar; 20(5):. PubMed ID: 32164152

  • 2. Expressive power of ReLU and step networks under floating-point operations.
    Park Y; Hwang G; Lee W; Park S
    Neural Netw; 2024 Jul; 175():106297. PubMed ID: 38643619

  • 3. Number Formats, Error Mitigation, and Scope for 16-Bit Arithmetics in Weather and Climate Modeling Analyzed With a Shallow Water Model.
    Klöwer M; Düben PD; Palmer TN
    J Adv Model Earth Syst; 2020 Oct; 12(10):e2020MS002246. PubMed ID: 33282116

  • 4. Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training.
    Junaid M; Aliev H; Park S; Kim H; Yoo H; Sim S
    Sensors (Basel); 2024 Mar; 24(7):. PubMed ID: 38610356

  • 5. Training high-performance and large-scale deep neural networks with full 8-bit integers.
    Yang Y; Deng L; Wu S; Yan T; Xie Y; Li G
    Neural Netw; 2020 May; 125():70-82. PubMed ID: 32070857

  • 6. Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
    Sarkar M; Ghosal P; Mohanty SP
    IEEE Trans Nanobioscience; 2017 Sep; 16(6):383-399. PubMed ID: 28715334

  • 7. Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets.
    Neville RS; Stonham TJ; Glover RJ
    Neural Netw; 2000 Jan; 13(1):91-110. PubMed ID: 10935462

  • 8. Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors.
    Junaid M; Arslan S; Lee T; Kim H
    Sensors (Basel); 2022 Feb; 22(3):. PubMed ID: 35161975

  • 9. L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks.
    Wu S; Li G; Deng L; Liu L; Wu D; Xie Y; Shi L
    IEEE Trans Neural Netw Learn Syst; 2019 Jul; 30(7):2043-2051. PubMed ID: 30418924

  • 10. High-Performance Acceleration of 2-D and 3-D CNNs on FPGAs Using Static Block Floating Point.
    Fan H; Liu S; Que Z; Niu X; Luk W
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4473-4487. PubMed ID: 34644253

  • 11. A blueprint for precise and fault-tolerant analog neural networks.
    Demirkiran C; Nair L; Bunandar D; Joshi A
    Nat Commun; 2024 Jun; 15(1):5098. PubMed ID: 38877006

  • 12. Fluid Simulations Accelerated With 16 Bits: Approaching 4x Speedup on A64FX by Squeezing ShallowWaters.jl Into Float16.
    Klöwer M; Hatfield S; Croci M; Düben PD; Palmer TN
    J Adv Model Earth Syst; 2022 Feb; 14(2):e2021MS002684. PubMed ID: 35866041

  • 13. Discontinuities in recurrent neural networks.
    Gavaldá R; Siegelmann HT
    Neural Comput; 1999 Apr; 11(3):715-46. PubMed ID: 10085427

  • 14. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100():49-58. PubMed ID: 29471195

  • 15. Design of a DNA-based reversible arithmetic and logic unit.
    Sarker A; Hasan Babu HM; Rashid SM
    IET Nanobiotechnol; 2015 Aug; 9(4):226-38. PubMed ID: 26224353

  • 16. Accuracy and performance of the lattice Boltzmann method with 64-bit, 32-bit, and customized 16-bit number formats.
    Lehmann M; Krause MJ; Amati G; Sega M; Harting J; Gekle S
    Phys Rev E; 2022 Jul; 106(1-2):015308. PubMed ID: 35974647

  • 17. DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains.
    Chen X; Zhou M; Gong Z; Xu W; Liu X; Huang T; Zhen Z; Liu J
    Front Comput Neurosci; 2020; 14():580632. PubMed ID: 33328946

  • 18. Mixed-precision weights network for field-programmable gate array.
    Fuengfusin N; Tamukoh H
    PLoS One; 2021; 16(5):e0251329. PubMed ID: 33970965

  • 19. Arithmetic logic unit based on the metastructure with coherent absorption.
    Zou JH; Sui JY; Chen Q; Zhang HF
    Opt Lett; 2023 Nov; 48(21):5699-5702. PubMed ID: 37910737

  • 20. BE-CALF: Bit-Depth Enhancement by Concatenating All Level Features of DNN.
    Liu J; Sun W; Su Y; Jing P; Yang X
    IEEE Trans Image Process; 2019 Oct; 28(10):4926-4940. PubMed ID: 31094688
