These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine *

141 related articles for article (PubMed ID: 34890336)

  • 41. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup.
    Goldt S; Advani MS; Saxe AM; Krzakala F; Zdeborová L
    J Stat Mech; 2020 Dec; 2020(12):124010. PubMed ID: 34262607

  • 42. Cost-effective stochastic MAC circuits for deep neural networks.
    Sim H; Lee J
    Neural Netw; 2019 Sep; 117():152-162. PubMed ID: 31170575

  • 43. Enhancing deep neural network training efficiency and performance through linear prediction.
    Ying H; Song M; Tang Y; Xiao S; Xiao Z
    Sci Rep; 2024 Jul; 14(1):15197. PubMed ID: 38956088

  • 44. SGD-Based Adaptive NN Control Design for Uncertain Nonlinear Systems.
    Yang X; Zheng X; Gao H
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):5071-5083. PubMed ID: 29994566

  • 45. MONETA: A Processing-In-Memory-Based Hardware Platform for the Hybrid Convolutional Spiking Neural Network With Online Learning.
    Kim D; Chakraborty B; She X; Lee E; Kang B; Mukhopadhyay S
    Front Neurosci; 2022; 16():775457. PubMed ID: 35478844

  • 46. Prediction of metasurface spectral response based on a deep neural network.
    Chen Y; Ding Z; Wang J; Zhou J; Zhang M
    Opt Lett; 2022 Oct; 47(19):5092-5095. PubMed ID: 36181194

  • 47. Data classification based on fractional order gradient descent with momentum for RBF neural network.
    Xue H; Shao Z; Sun H
    Network; 2020; 31(1-4):166-185. PubMed ID: 33283569

  • 48. Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training.
    Liu C; Zhang X; Zhang R; Li L; Zhou S; Huang D; Li Z; Du Z; Liu S; Chen T
    IEEE Trans Image Process; 2022; 31():7006-7019. PubMed ID: 36322492

  • 49. A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
    Xu J; Zhang W; Wang F
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8036-8047. PubMed ID: 34449356

  • 50. SalvageDNN: salvaging deep neural network accelerators with permanent faults through saliency-driven fault-aware mapping.
    Abdullah Hanif M; Shafique M
    Philos Trans A Math Phys Eng Sci; 2020 Feb; 378(2164):20190164. PubMed ID: 31865875

  • 51. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
    Liu Y; Zhang M; Zhong Z; Zeng X
    Med Phys; 2023 Mar; 50(3):1528-1538. PubMed ID: 36057788

  • 52. A Low-Power DNN Accelerator Enabled by a Novel Staircase RRAM Array.
    Veluri H; Chand U; Li Y; Tang B; Thean AV
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4416-4427. PubMed ID: 34669580

  • 53. SCA: Search-Based Computing Hardware Architecture with Precision Scalable and Computation Reconfigurable Scheme.
    Chang L; Zhao X; Zhou J
    Sensors (Basel); 2022 Nov; 22(21):. PubMed ID: 36366242

  • 54. Hardware-Efficient Stochastic Binary CNN Architectures for Near-Sensor Computing.
    Parmar V; Penkovsky B; Querlioz D; Suri M
    Front Neurosci; 2021; 15():781786. PubMed ID: 35069101

  • 55. A Novel Low-Bit Quantization Strategy for Compressing Deep Neural Networks.
    Long X; Zeng X; Ben Z; Zhou D; Zhang M
    Comput Intell Neurosci; 2020; 2020():7839064. PubMed ID: 32148472

  • 56. Unsupervised Network Quantization via Fixed-Point Factorization.
    Wang P; He X; Chen Q; Cheng A; Liu Q; Cheng J
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2706-2720. PubMed ID: 32706647

  • 57. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.
    Bernal J; Torres-Jimenez J
    J Res Natl Inst Stand Technol; 2015; 120():113-28. PubMed ID: 26958442

  • 58. The Limiting Dynamics of SGD: Modified Loss, Phase-Space Oscillations, and Anomalous Diffusion.
    Kunin D; Sagastuy-Brena J; Gillespie L; Margalit E; Tanaka H; Ganguli S; Yamins DLK
    Neural Comput; 2023 Dec; 36(1):151-174. PubMed ID: 38052080

  • 59. Neural Network Training Acceleration With RRAM-Based Hybrid Synapses.
    Choi W; Kwak M; Kim S; Hwang H
    Front Neurosci; 2021; 15():690418. PubMed ID: 34248492

  • 60. A Progressive Subnetwork Searching Framework for Dynamic Inference.
    Yang L; He Z; Cao Y; Fan D
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3809-3820. PubMed ID: 36063528
