These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.

BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

107 related articles for article (PubMed ID: 37027555)

  • 41. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 42. Enabling Training of Neural Networks on Noisy Hardware.
    Gokmen T
    Front Artif Intell; 2021; 4():699148. PubMed ID: 34568813

  • 43. A scalable discrete-time survival model for neural networks.
    Gensheimer MF; Narasimhan B
    PeerJ; 2019; 7():e6257. PubMed ID: 30701130

  • 44. Incremental Trainable Parameter Selection in Deep Neural Networks.
    Thakur A; Abrol V; Sharma P; Zhu T; Clifton DA
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):6478-6491. PubMed ID: 36219657

  • 45. Extracting stochastic governing laws by non-local Kramers-Moyal formulae.
    Lu Y; Li Y; Duan J
    Philos Trans A Math Phys Eng Sci; 2022 Aug; 380(2229):20210195. PubMed ID: 35719068

  • 46. Weak-noise limit of a piecewise-smooth stochastic differential equation.
    Chen Y; Baule A; Touchette H; Just W
    Phys Rev E Stat Nonlin Soft Matter Phys; 2013 Nov; 88(5):052103. PubMed ID: 24329210

  • 47. Strong stochastic persistence of some Lévy-driven Lotka-Volterra systems.
    Videla L
    J Math Biol; 2022 Jan; 84(3):11. PubMed ID: 35022843

  • 48. Distinguishing between fractional Brownian motion with random and constant Hurst exponent using sample autocovariance-based statistics.
    Grzesiek A; Gajda J; Thapa S; Wyłomańska A
    Chaos; 2024 Apr; 34(4):. PubMed ID: 38668586

  • 49. Estimation of Granger causality through Artificial Neural Networks: applications to physiological systems and chaotic electronic oscillators.
    Antonacci Y; Minati L; Faes L; Pernice R; Nollo G; Toppi J; Pietrabissa A; Astolfi L
    PeerJ Comput Sci; 2021; 7():e429. PubMed ID: 34084917

  • 50. Drill the Cork of Information Bottleneck by Inputting the Most Important Data.
    Peng X; Zhang J; Wang FY; Li L
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6360-6372. PubMed ID: 34029196

  • 51. Selecting the best optimizers for deep learning-based medical image segmentation.
    Mortazi A; Cicek V; Keles E; Bagci U
    Front Radiol; 2023; 3():1175473. PubMed ID: 37810757

  • 52. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
    Sun H; Shen L; Zhong Q; Ding L; Chen S; Sun J; Li J; Sun G; Tao D
    Neural Netw; 2024 Jan; 169():506-519. PubMed ID: 37944247

  • 53. An Efficient Preconditioner for Stochastic Gradient Descent Optimization of Image Registration.
    Qiao Y; Lelieveldt BPF; Staring M
    IEEE Trans Med Imaging; 2019 Oct; 38(10):2314-2325. PubMed ID: 30762536

  • 54. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
    Liu Y; Zhang M; Zhong Z; Zeng X
    Med Phys; 2023 Mar; 50(3):1528-1538. PubMed ID: 36057788

  • 55. Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems.
    Angelini MC; Cavaliere AG; Marino R; Ricci-Tersenghi F
    Sci Rep; 2024 May; 14(1):11638. PubMed ID: 38773255

  • 56. diffGrad: An Optimization Method for Convolutional Neural Networks.
    Dubey SR; Chakraborty S; Roy SK; Mukherjee S; Singh SK; Chaudhuri BB
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4500-4511. PubMed ID: 31880565

  • 57. An Improvised Sentiment Analysis Model on Twitter Data Using Stochastic Gradient Descent (SGD) Optimization Algorithm in Stochastic Gate Neural Network (SGNN).
    Vidyashree KP; Rajendra AB
    SN Comput Sci; 2023; 4(2):190. PubMed ID: 36748096

  • 58. Rotation algorithm: generation of Gaussian self-similar stochastic processes.
    Vahabi M; Jafari GR
    Phys Rev E Stat Nonlin Soft Matter Phys; 2012 Dec; 86(6 Pt 2):066704. PubMed ID: 23368075

  • 59. Learning from Data with Heterogeneous Noise using SGD.
    Song S; Chaudhuri K; Sarwate AD
    JMLR Workshop Conf Proc; 2015 Feb; 2015():894-902. PubMed ID: 26705435

  • 60. Low Complexity Gradient Computation Techniques to Accelerate Deep Neural Network Training.
    Shin D; Kim G; Jo J; Park J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5745-5759. PubMed ID: 34890336
