These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

122 related articles for article (PubMed ID: 31725393)

  • 1. Compressing Deep Neural Networks With Sparse Matrix Factorization.
    Wu K; Guo Y; Zhang C
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):3828-3838. PubMed ID: 31725393

  • 2. Compressing Deep Networks by Neuron Agglomerative Clustering.
    Wang LN; Liu W; Liu X; Zhong G; Roy PP; Dong J; Huang K
    Sensors (Basel); 2020 Oct; 20(21):. PubMed ID: 33114078

  • 3. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
    Zang K; Wu W; Luo W
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640730

  • 4. Transformed ℓ1 regularization for learning sparse deep neural networks.
    Ma R; Miao J; Niu L; Zhang P
    Neural Netw; 2019 Nov; 119():286-298. PubMed ID: 31499353

  • 5. SSGD: Sparsity-Promoting Stochastic Gradient Descent Algorithm for Unbiased DNN Pruning.
    Lee CH; Fedorov I; Rao BD; Garudadri H
    Proc IEEE Int Conf Acoust Speech Signal Process; 2020 May; 2020():5410-5414. PubMed ID: 33162834

  • 6. Learning matrix factorization with scalable distance metric and regularizer.
    Wang S; Zhang Y; Lin X; Su L; Xiao G; Zhu W; Shi Y
    Neural Netw; 2023 Apr; 161():254-266. PubMed ID: 36774864

  • 7. Quaternion Factorization Machines: A Lightweight Solution to Intricate Feature Interaction Modeling.
    Chen T; Yin H; Zhang X; Huang Z; Wang Y; Wang M
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4345-4358. PubMed ID: 34665744

  • 8. Prediction of Compound Profiling Matrices, Part II: Relative Performance of Multitask Deep Learning and Random Forest Classification on the Basis of Varying Amounts of Training Data.
    Rodríguez-Pérez R; Bajorath J
    ACS Omega; 2018 Sep; 3(9):12033-12040. PubMed ID: 30320286

  • 9. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.
    Mocanu DC; Mocanu E; Stone P; Nguyen PH; Gibescu M; Liotta A
    Nat Commun; 2018 Jun; 9(1):2383. PubMed ID: 29921910

  • 10. Adversarial Margin Maximization Networks.
    Yan Z; Guo Y; Zhang C
    IEEE Trans Pattern Anal Mach Intell; 2021 Apr; 43(4):1129-1139. PubMed ID: 31634825

  • 11. Consistent Sparse Deep Learning: Theory and Computation.
    Sun Y; Song Q; Liang F
    J Am Stat Assoc; 2022; 117(540):1981-1995. PubMed ID: 36945326

  • 12. DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains.
    Chen X; Zhou M; Gong Z; Xu W; Liu X; Huang T; Zhen Z; Liu J
    Front Comput Neurosci; 2020; 14():580632. PubMed ID: 33328946

  • 13. Neural Classifiers with Limited Connectivity and Recurrent Readouts.
    Kushnir L; Fusi S
    J Neurosci; 2018 Nov; 38(46):9900-9924. PubMed ID: 30249794

  • 14. A Knee-Guided Evolutionary Algorithm for Compressing Deep Neural Networks.
    Zhou Y; Yen GG; Yi Z
    IEEE Trans Cybern; 2021 Mar; 51(3):1626-1638. PubMed ID: 31380778

  • 15. Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics.
    He T; Kong R; Holmes AJ; Nguyen M; Sabuncu MR; Eickhoff SB; Bzdok D; Feng J; Yeo BTT
    Neuroimage; 2020 Feb; 206():116276. PubMed ID: 31610298

  • 16. Hybrid tensor decomposition in neural network compression.
    Wu B; Wang D; Zhao G; Deng L; Li G
    Neural Netw; 2020 Dec; 132():309-320. PubMed ID: 32977276

  • 17. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100():49-58. PubMed ID: 29471195

  • 18. Brain hierarchy score: Which deep neural networks are hierarchically brain-like?
    Nonaka S; Majima K; Aoki SC; Kamitani Y
    iScience; 2021 Sep; 24(9):103013. PubMed ID: 34522856

  • 19. Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning With Preferential Attachment.
    Feng F; Hou L; She Q; Chan RHM; Kwok JT
    IEEE Trans Neural Netw Learn Syst; 2024 Jul; 35(7):8999-9013. PubMed ID: 36342998

  • 20. Deep neural networks using a single neuron: folded-in-time architecture using feedback-modulated delay loops.
    Stelzer F; Röhm A; Vicente R; Fischer I; Yanchuk S
    Nat Commun; 2021 Aug; 12(1):5164. PubMed ID: 34453053
