These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service with any questions.


BIOMARKERS: Molecular Biopsy of Human Tumors, a resource for Precision Medicine

129 related articles for article (PubMed ID: 31714239)

  • 1. Singular Values for ReLU Layers.
    Dittmer S; King EJ; Maass P
    IEEE Trans Neural Netw Learn Syst; 2020 Sep; 31(9):3594-3605. PubMed ID: 31714239

  • 2. Neural networks with ReLU powers need less depth.
    Cabanilla KIM; Mohammad RZ; Lope JEC
    Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509

  • 3. ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions.
    Huang C
    Neural Comput; 2020 Nov; 32(11):2249-2278. PubMed ID: 32946706

  • 4. Random Sketching for Neural Networks With ReLU.
    Wang D; Zeng J; Lin SB
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):748-762. PubMed ID: 32275612

  • 5. Approximation of smooth functionals using deep ReLU networks.
    Song L; Liu Y; Fan J; Zhou DX
    Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610

  • 6. Role of Layers and Neurons in Deep Learning With the Rectified Linear Unit.
    Takekawa A; Kajiura M; Fukuda H
    Cureus; 2021 Oct; 13(10):e18866. PubMed ID: 34820210

  • 7. Analytical Bounds on the Local Lipschitz Constants of ReLU Networks.
    Avant T; Morgansen KA
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37368808

  • 8. Convergence of deep convolutional neural networks.
    Xu Y; Zhang H
    Neural Netw; 2022 Sep; 153():553-563. PubMed ID: 35839599

  • 9. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 10. Recursion Newton-Like Algorithm for l2,0-Regularized Optimization.
    Zhang H; Yuan Z; Xiu N
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5882-5896. PubMed ID: 34898441

  • 11. Consideration on the learning efficiency of multiple-layered neural networks with linear units.
    Aoyagi M
    Neural Netw; 2024 Apr; 172():106132. PubMed ID: 38278091

  • 12. Optimization of Microchannels and Application of Basic Activation Functions of Deep Neural Network for Accuracy Analysis of Microfluidic Parameter Data.
    Ahmed F; Shimizu M; Wang J; Sakai K; Kiwa T
    Micromachines (Basel); 2022 Aug; 13(8):. PubMed ID: 36014274

  • 13. Training a Two-Layer ReLU Network Analytically.
    Barbu A
    Sensors (Basel); 2023 Apr; 23(8):. PubMed ID: 37112413

  • 14. Multimodal transistors as ReLU activation functions in physical neural network classifiers.
    Surekcigil Pesch I; Bestelink E; de Sagazan O; Mehonic A; Sporea RA
    Sci Rep; 2022 Jan; 12(1):670. PubMed ID: 35027631

  • 15. Optical ReLU using membrane lasers for an all-optical neural network.
    Takahashi N; Fang W; Xue R; Okada S; Ohiso Y; Amemiya T; Nishiyama N
    Opt Lett; 2022 Nov; 47(21):5715-5718. PubMed ID: 37219311

  • 16. Classification of Alzheimer's Disease Based on Eight-Layer Convolutional Neural Network with Leaky Rectified Linear Unit and Max Pooling.
    Wang SH; Phillips P; Sui Y; Liu B; Yang M; Cheng H
    J Med Syst; 2018 Mar; 42(5):85. PubMed ID: 29577169

  • 17. Prediction of bubble departing diameter in pool boiling of mixtures by ANN using modified ReLU.
    Fazel SAA
    Heliyon; 2024 Jun; 10(11):e31261. PubMed ID: 38832267

  • 18. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 19. Studying the Evolution of Neural Activation Patterns During Training of Feed-Forward ReLU Networks.
    Hartmann D; Franzen D; Brodehl S
    Front Artif Intell; 2021; 4():642374. PubMed ID: 35005614

  • 20. On minimal representations of shallow ReLU networks.
    Dereich S; Kassing S
    Neural Netw; 2022 Apr; 148():121-128. PubMed ID: 35123261
