BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

111 related articles for PubMed ID 35005614

  • 1. Studying the Evolution of Neural Activation Patterns During Training of Feed-Forward ReLU Networks.
    Hartmann D; Franzen D; Brodehl S
    Front Artif Intell; 2021; 4():642374. PubMed ID: 35005614

  • 2. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334

  • 3. Convergence of deep convolutional neural networks.
    Xu Y; Zhang H
    Neural Netw; 2022 Sep; 153():553-563. PubMed ID: 35839599

  • 4. Deep Convolutional Neural Networks for large-scale speech tasks.
    Sainath TN; Kingsbury B; Saon G; Soltau H; Mohamed AR; Dahl G; Ramabhadran B
    Neural Netw; 2015 Apr; 64():39-48. PubMed ID: 25439765

  • 5. Improved object recognition using neural networks trained to mimic the brain's statistical properties.
    Federer C; Xu H; Fyshe A; Zylberberg J
    Neural Netw; 2020 Nov; 131():103-114. PubMed ID: 32771841

  • 6. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks.
    Liu B; Liu Z; Zhang T; Yuan T
    Neural Netw; 2021 Dec; 144():75-89. PubMed ID: 34454244

  • 7. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 8. Compact and Computationally Efficient Representation of Deep Neural Networks.
    Wiedemann S; Müller KR; Samek W
    IEEE Trans Neural Netw Learn Syst; 2020 Mar; 31(3):772-785. PubMed ID: 31150347

  • 9. Singular Values for ReLU Layers.
    Dittmer S; King EJ; Maass P
    IEEE Trans Neural Netw Learn Syst; 2020 Sep; 31(9):3594-3605. PubMed ID: 31714239

  • 10. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
    Deng L; Jiao P; Pei J; Wu Z; Li G
    Neural Netw; 2018 Apr; 100():49-58. PubMed ID: 29471195

  • 11. Training Lightweight Deep Convolutional Neural Networks Using Bag-of-Features Pooling.
    Passalis N; Tefas A
    IEEE Trans Neural Netw Learn Syst; 2019 Jun; 30(6):1705-1715. PubMed ID: 30369453

  • 12. An adiabatic method to train binarized artificial neural networks.
    Zhao Y; Xiao J
    Sci Rep; 2021 Oct; 11(1):19797. PubMed ID: 34611220

  • 13. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 14. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
    Kim Y; Panda P
    Neural Netw; 2021 Dec; 144():686-698. PubMed ID: 34662827

  • 15. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling.
    Mutasa S; Chang PD; Ruzal-Shapiro C; Ayyala R
    J Digit Imaging; 2018 Aug; 31(4):513-519. PubMed ID: 29404850

  • 16. Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.
    Zhang S; Wang M; Xiong J; Liu S; Chen PY
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2622-2635. PubMed ID: 32726280

  • 17. Initializing photonic feed-forward neural networks using auxiliary tasks.
    Passalis N; Mourgias-Alexandris G; Pleros N; Tefas A
    Neural Netw; 2020 Sep; 129():103-108. PubMed ID: 32504819

  • 18. Multimodal transistors as ReLU activation functions in physical neural network classifiers.
    Surekcigil Pesch I; Bestelink E; de Sagazan O; Mehonic A; Sporea RA
    Sci Rep; 2022 Jan; 12(1):670. PubMed ID: 35027631

  • 19. Cellular automata as convolutional neural networks.
    Gilpin W
    Phys Rev E; 2019 Sep; 100(3-1):032402. PubMed ID: 31639995

  • 20. ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions.
    Huang C
    Neural Comput; 2020 Nov; 32(11):2249-2278. PubMed ID: 32946706
