106 related articles for PubMed ID 36772365

  • 1. Less Is More: Adaptive Trainable Gradient Dropout for Deep Neural Networks.
    Avgerinos C; Vretos N; Daras P
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772365

  • 2. Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks.
    Sardar N; Khan S; Hintze A; Mehra P
    Entropy (Basel); 2023 Jun; 25(6):. PubMed ID: 37372277

  • 3. Shakeout: A New Approach to Regularized Deep Neural Network Training.
    Kang G; Li J; Tao D
    IEEE Trans Pattern Anal Mach Intell; 2018 May; 40(5):1245-1258. PubMed ID: 28489533

  • 4. Supervised Deep Sparse Coding Networks for Image Classification.
    Sun X; Nasrabadi NM; Tran TD
    IEEE Trans Image Process; 2019 Jul; ():. PubMed ID: 31331886

  • 5. The Dropout Learning Algorithm.
    Baldi P; Sadowski P
    Artif Intell; 2014 May; 210():78-122. PubMed ID: 24771879

  • 6. EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks.
    Salehinejad H; Valaee S
    IEEE Trans Neural Netw Learn Syst; 2022 Oct; 33(10):5279-5292. PubMed ID: 33830931

  • 7. Regularization of deep neural networks with spectral dropout.
    Khan SH; Hayat M; Porikli F
    Neural Netw; 2019 Feb; 110():82-90. PubMed ID: 30504041

  • 8. Transformed ℓ1 regularization for learning sparse deep neural networks.
    Ma R; Miao J; Niu L; Zhang P
    Neural Netw; 2019 Nov; 119():286-298. PubMed ID: 31499353

  • 9. Implicit Regularization of Dropout.
    Zhang Z; Xu ZJ
    IEEE Trans Pattern Anal Mach Intell; 2024 Jun; 46(6):4206-4217. PubMed ID: 38261480

  • 10. Forward propagation dropout in deep neural networks using Jensen-Shannon and random forest feature importance ranking.
    Heidari M; Moattar MH; Ghaffari H
    Neural Netw; 2023 Aug; 165():238-247. PubMed ID: 37307667

  • 11. Advanced Dropout: A Model-Free Methodology for Bayesian Dropout Optimization.
    Xie J; Ma Z; Lei J; Zhang G; Xue JH; Tan ZH; Guo J
    IEEE Trans Pattern Anal Mach Intell; 2022 Sep; 44(9):4605-4625. PubMed ID: 34029187

  • 12. Maximum Relevance Minimum Redundancy Dropout with Informative Kernel Determinantal Point Process.
    Saffari M; Khodayar M; Ebrahimi Saadabadi MS; Sequeira AF; Cardoso JS
    Sensors (Basel); 2021 Mar; 21(5):. PubMed ID: 33800810

  • 13. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
    Zang K; Wu W; Luo W
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640730

  • 14. The Stochastic Delta Rule: Faster and More Accurate Deep Learning Through Adaptive Weight Noise.
    Frazier-Logue N; Hanson SJ
    Neural Comput; 2020 May; 32(5):1018-1032. PubMed ID: 32187001

  • 15. Hybridized sine cosine algorithm with convolutional neural networks dropout regularization application.
    Bacanin N; Zivkovic M; Al-Turjman F; Venkatachalam K; Trojovský P; Strumberger I; Bezdan T
    Sci Rep; 2022 Apr; 12(1):6302. PubMed ID: 35440609

  • 16. Flipover outperforms dropout in deep learning.
    Liang Y; Niu C; Yan P; Wang G
    Vis Comput Ind Biomed Art; 2024 Feb; 7(1):4. PubMed ID: 38386109

  • 17. Equivalence between dropout and data augmentation: A mathematical check.
    Zhao D; Yu G; Xu P; Luo M
    Neural Netw; 2019 Jul; 115():82-89. PubMed ID: 30978610

  • 18. Adaptive Dropout Method Based on Biological Principles.
    Li H; Weng J; Mao Y; Wang Y; Zhan Y; Cai Q; Gu W
    IEEE Trans Neural Netw Learn Syst; 2021 Sep; 32(9):4267-4276. PubMed ID: 33872159

  • 19. Reducing the U-Net size for practical scenarios: Virus recognition in electron microscopy images.
    Matuszewski DJ; Sintorn IM
    Comput Methods Programs Biomed; 2019 Sep; 178():31-39. PubMed ID: 31416558

  • 20. Automatic Sparse Connectivity Learning for Neural Networks.
    Tang Z; Luo L; Xie B; Zhu Y; Zhao R; Bi L; Lu C
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7350-7364. PubMed ID: 35073273