BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

127 related articles for article (PubMed ID: 38392358)

  • 21. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
    Zhang D; Dong Y
    Neural Netw; 2023 Oct; 167():730-740. PubMed ID: 37729788

  • 22. On the Robustness of Bayesian Neural Networks to Adversarial Attacks.
    Bortolussi L; Carbone G; Laurenti L; Patane A; Sanguinetti G; Wicker M
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; PP():. PubMed ID: 38648123

  • 23. Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning.
    Everett M; Lutjens B; How JP
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4184-4198. PubMed ID: 33587714

  • 24. Toward Certified Robustness of Distance Metric Learning.
    Yang X; Guo Y; Dong M; Xue JH
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):3834-3844. PubMed ID: 36112549

  • 25. Evaluation of GAN-Based Model for Adversarial Training.
    Zhao W; Mahmoud QH; Alwidian S
    Sensors (Basel); 2023 Mar; 23(5):. PubMed ID: 36904900

  • 26. Adversarially Robust Learning.
    Jagatap G; Joshi A; Chowdhury AB; Garg S; Hegde C
    Front Artif Intell; 2021; 4():780843. PubMed ID: 35059637

  • 27. Robustness meets accuracy in adversarial training for graph autoencoder.
    Zhou X; Hu K; Wang H
    Neural Netw; 2023 Jan; 157():114-124. PubMed ID: 36334533

  • 28. On the role of deep learning model complexity in adversarial robustness for medical images.
    Rodriguez D; Nayak T; Chen Y; Krishnan R; Huang Y
    BMC Med Inform Decis Mak; 2022 Jun; 22(Suppl 2):160. PubMed ID: 35725429

  • 29. Implicit adversarial data augmentation and robustness with Noise-based Learning.
    Panda P; Roy K
    Neural Netw; 2021 Sep; 141():120-132. PubMed ID: 33894652

  • 30. Regularization Meets Enhanced Multi-Stage Fusion Features: Making CNN More Robust against White-Box Adversarial Attacks.
    Zhang J; Maeda K; Ogawa T; Haseyama M
    Sensors (Basel); 2022 Jul; 22(14):. PubMed ID: 35891112

  • 31. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5306-5324. PubMed ID: 38349823

  • 32. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
    Li X; Goodman D; Liu J; Wei T; Dou D
    Front Artif Intell; 2021; 4():752831. PubMed ID: 35156010

  • 33. Training Provably Robust Models by Polyhedral Envelope Regularization.
    Liu C; Salzmann M; Susstrunk S
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; 34(6):3146-3160. PubMed ID: 34699369

  • 34. Boosting adversarial robustness via self-paced adversarial training.
    He L; Ai Q; Yang X; Ren Y; Wang Q; Xu Z
    Neural Netw; 2023 Oct; 167():706-714. PubMed ID: 37729786

  • 35. Adversarial Examples for Hamming Space Search.
    Yang E; Liu T; Deng C; Tao D
    IEEE Trans Cybern; 2020 Apr; 50(4):1473-1484. PubMed ID: 30561358

  • 36. Adversarial example defense based on image reconstruction.
    Zhang YA; Xu H; Pei C; Yang G
    PeerJ Comput Sci; 2021; 7():e811. PubMed ID: 35036533

  • 37. Attention-based investigation and solution to the trade-off issue of adversarial training.
    Shao C; Li W; Huo J; Feng Z; Gao Y
    Neural Netw; 2024 Jun; 174():106224. PubMed ID: 38479186

  • 38. Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.
    Tuna OF; Catak FO; Eskil MT
    Multimed Tools Appl; 2022; 81(8):11479-11500. PubMed ID: 35221776

  • 39. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 40. How adversarial attacks can disrupt seemingly stable accurate classifiers.
    Sutton OJ; Zhou Q; Tyukin IY; Gorban AN; Bastounis A; Higham DJ
    Neural Netw; 2024 Dec; 180():106711. PubMed ID: 39299037
