
114 related articles for PubMed ID 31725393

  • 1. Compressing Deep Neural Networks With Sparse Matrix Factorization.
    Wu K; Guo Y; Zhang C
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):3828-3838. PubMed ID: 31725393

  • 2. Compressing Deep Networks by Neuron Agglomerative Clustering.
    Wang LN; Liu W; Liu X; Zhong G; Roy PP; Dong J; Huang K
    Sensors (Basel); 2020 Oct; 20(21). PubMed ID: 33114078

  • 3. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
    Zang K; Wu W; Luo W
    Sensors (Basel); 2021 Sep; 21(19). PubMed ID: 34640730

  • 4. Transformed ℓ1 regularization for learning sparse deep neural networks.
    Ma R; Miao J; Niu L; Zhang P
    Neural Netw; 2019 Nov; 119():286-298. PubMed ID: 31499353

  • 5. Learning matrix factorization with scalable distance metric and regularizer.
    Wang S; Zhang Y; Lin X; Su L; Xiao G; Zhu W; Shi Y
    Neural Netw; 2023 Apr; 161():254-266. PubMed ID: 36774864

  • 6. Prediction of Compound Profiling Matrices, Part II: Relative Performance of Multitask Deep Learning and Random Forest Classification on the Basis of Varying Amounts of Training Data.
    Rodríguez-Pérez R; Bajorath J
    ACS Omega; 2018 Sep; 3(9):12033-12040. PubMed ID: 30320286

  • 7. SSGD: Sparsity-Promoting Stochastic Gradient Descent Algorithm for Unbiased DNN Pruning.
    Lee CH; Fedorov I; Rao BD; Garudadri H
    Proc IEEE Int Conf Acoust Speech Signal Process; 2020 May; 2020():5410-5414. PubMed ID: 33162834

  • 8. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.
    Mocanu DC; Mocanu E; Stone P; Nguyen PH; Gibescu M; Liotta A
    Nat Commun; 2018 Jun; 9(1):2383. PubMed ID: 29921910

  • 9. Adversarial Margin Maximization Networks.
    Yan Z; Guo Y; Zhang C
    IEEE Trans Pattern Anal Mach Intell; 2021 Apr; 43(4):1129-1139. PubMed ID: 31634825

  • 10. Consistent Sparse Deep Learning: Theory and Computation.
    Sun Y; Song Q; Liang F
    J Am Stat Assoc; 2022; 117(540):1981-1995. PubMed ID: 36945326

  • 11. Quaternion Factorization Machines: A Lightweight Solution to Intricate Feature Interaction Modeling.
    Chen T; Yin H; Zhang X; Huang Z; Wang Y; Wang M
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4345-4358. PubMed ID: 34665744

  • 12. A Knee-Guided Evolutionary Algorithm for Compressing Deep Neural Networks.
    Zhou Y; Yen GG; Yi Z
    IEEE Trans Cybern; 2021 Mar; 51(3):1626-1638. PubMed ID: 31380778

  • 13. Hybrid tensor decomposition in neural network compression.
    Wu B; Wang D; Zhao G; Deng L; Li G
    Neural Netw; 2020 Dec; 132():309-320. PubMed ID: 32977276

  • 14. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100():49-58. PubMed ID: 29471195

  • 15. Brain hierarchy score: Which deep neural networks are hierarchically brain-like?
    Nonaka S; Majima K; Aoki SC; Kamitani Y
    iScience; 2021 Sep; 24(9):103013. PubMed ID: 34522856

  • 16. Sparse factorization of square matrices with application to neural attention modeling.
    Khalitov R; Yu T; Cheng L; Yang Z
    Neural Netw; 2022 Aug; 152():160-168. PubMed ID: 35525164

  • 17. Direct Feedback Alignment With Sparse Connections for Local Learning.
    Crafton B; Parihar A; Gebhardt E; Raychowdhury A
    Front Neurosci; 2019; 13():525. PubMed ID: 31178689

  • 18. A New Aggregation of DNN Sparse and Dense Labeling for Saliency Detection.
    Yan K; Wang X; Kim J; Feng D
    IEEE Trans Cybern; 2021 Dec; 51(12):5907-5920. PubMed ID: 31976925

  • 19. Improving efficiency in convolutional neural networks with multilinear filters.
    Tran DT; Iosifidis A; Gabbouj M
    Neural Netw; 2018 Sep; 105():328-339. PubMed ID: 29920430

  • 20. Deep Convolutional Neural Networks for large-scale speech tasks.
    Sainath TN; Kingsbury B; Saon G; Soltau H; Mohamed AR; Dahl G; Ramabhadran B
    Neural Netw; 2015 Apr; 64():39-48. PubMed ID: 25439765
