303 related articles for PubMed ID 34748482

  • 1. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 2. Adversarial example defense based on image reconstruction.
    Zhang YA; Xu H; Pei C; Yang G
    PeerJ Comput Sci; 2021; 7():e811. PubMed ID: 35036533

  • 3. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
    Mahmood K; Gurevin D; van Dijk M; Nguyen PH
    Entropy (Basel); 2021 Oct; 23(10):. PubMed ID: 34682083

  • 4. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327923

  • 5. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 6. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Feb; PP():. PubMed ID: 38349823

  • 7. Sinkhorn Adversarial Attack and Defense.
    Subramanyam AV
    IEEE Trans Image Process; 2022; 31():4039-4049. PubMed ID: 35679377

  • 8. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
    Liu R; Jin X; Hu D; Zhang J; Wang Y; Zhang J; Zhou W
    Front Neurorobot; 2023; 17():1129720. PubMed ID: 36845066

  • 9. Image Super-Resolution as a Defense Against Adversarial Attacks.
    Mustafa A; Khan SH; Hayat M; Shen J; Shao L
    IEEE Trans Image Process; 2019 Sep; ():. PubMed ID: 31545722

  • 10. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
    Wang D; Jin W; Wu Y
    Sensors (Basel); 2023 Mar; 23(6):. PubMed ID: 36991962

  • 11. Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks.
    Zhang L; Zhou Y; Yang Y; Gao X
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; PP():. PubMed ID: 38587963

  • 12. Approaching Adversarial Example Classification with Chaos Theory.
    Pedraza A; Deniz O; Bueno G
    Entropy (Basel); 2020 Oct; 22(11):. PubMed ID: 33286969

  • 13. Defense against adversarial attacks based on color space transformation.
    Wang H; Wu C; Zheng K
    Neural Netw; 2024 May; 173():106176. PubMed ID: 38402810

  • 14. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
    Khan S; Chen JC; Liao WH; Chen CS
    Sensors (Basel); 2023 Jul; 23(13):. PubMed ID: 37448021

  • 15. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
    Mygdalis V; Tefas A; Pitas I
    Neural Netw; 2020 Apr; 124():296-307. PubMed ID: 32036227

  • 16. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181

  • 17. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 18. Towards Adversarial Robustness with Early Exit Ensembles.
    Qendro L; Mascolo C
    Annu Int Conf IEEE Eng Med Biol Soc; 2022 Jul; 2022():313-316. PubMed ID: 36086386

  • 19. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.
    Tavakoli M; Agostinelli F; Baldi P
    Neural Netw; 2021 Aug; 140():1-12. PubMed ID: 33743319

  • 20. Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach.
    Kansal K; Krishna PS; Jain PB; R S; Honnavalli P; Eswaran S
    Heliyon; 2022 Oct; 8(10):e11209. PubMed ID: 36311356
