
133 related articles for article (PubMed ID: 38587963)

  • 1. Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks.
    Zhang L; Zhou Y; Yang Y; Gao X
    IEEE Trans Pattern Anal Mach Intell; 2024 Oct; 46(10):6669-6687. PubMed ID: 38587963

  • 2. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5306-5324. PubMed ID: 38349823

  • 3. Image Super-Resolution as a Defense Against Adversarial Attacks.
    Mustafa A; Khan SH; Hayat M; Shen J; Shao L
    IEEE Trans Image Process; 2019 Sep; ():. PubMed ID: 31545722

  • 4. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 5. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 6. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 7. ApaNet: adversarial perturbations alleviation network for face verification.
    Sun G; Hu H; Su Y; Liu Q; Lu X
    Multimed Tools Appl; 2023; 82(5):7443-7461. PubMed ID: 36035322

  • 8. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary.
    Kotyan S; Vargas DV
    PLoS One; 2022; 17(4):e0265723. PubMed ID: 35421125

  • 9. Defense against adversarial attacks based on color space transformation.
    Wang H; Wu C; Zheng K
    Neural Netw; 2024 May; 173():106176. PubMed ID: 38402810

  • 10. Generalizable and Discriminative Representations for Adversarially Robust Few-Shot Learning.
    Dong J; Wang Y; Xie X; Lai J; Ong YS
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; PP():. PubMed ID: 38536695

  • 11. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 12. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks.
    Niu Z; Sun Y; Miao Q; Jin R; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Dec; 46(12):7589-7605. PubMed ID: 38652616

  • 13. On the robustness of skeleton detection against adversarial attacks.
    Bai X; Yang M; Liu Z
    Neural Netw; 2020 Dec; 132():416-427. PubMed ID: 33022470

  • 14. Generalizable Black-Box Adversarial Attack With Meta Learning.
    Yin F; Zhang Y; Wu B; Feng Y; Zhang J; Fan Y; Yang Y
    IEEE Trans Pattern Anal Mach Intell; 2024 Mar; 46(3):1804-1818. PubMed ID: 37021863

  • 15. Gradients Cannot Be Tamed: Behind the Impossible Paradox of Blocking Targeted Adversarial Attacks.
    Katzir Z; Elovici Y
    IEEE Trans Neural Netw Learn Syst; 2021 Jan; 32(1):128-138. PubMed ID: 32167916

  • 16. Improving Adversarial Robustness Against Universal Patch Attacks Through Feature Norm Suppressing.
    Yu C; Chen J; Wang Y; Xue Y; Ma H
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37917525

  • 17. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
    Wang S; Cao Y; Chen X; Yao L; Wang X; Sheng QZ
    Front Big Data; 2022; 5():822783. PubMed ID: 35592793

  • 18. Diffusion Models for Imperceptible and Transferable Adversarial Attack.
    Chen J; Chen H; Chen K; Zhang Y; Zou Z; Shi Z
    IEEE Trans Pattern Anal Mach Intell; 2024 Oct; PP():. PubMed ID: 39405140

  • 19. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
    Khan S; Chen JC; Liao WH; Chen CS
    Sensors (Basel); 2023 Jul; 23(13):. PubMed ID: 37448021

  • 20. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181
