162 related articles for article (PubMed ID: 34682083)
1. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples. Mahmood K; Gurevin D; van Dijk M; Nguyen PH. Entropy (Basel); 2021 Oct; 23(10). PubMed ID: 34682083
2. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks. Theagarajan R; Bhanu B. IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482
3. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers. Cao H; Si C; Sun Q; Liu Y; Li S; Gope P. Entropy (Basel); 2022 Mar; 24(3). PubMed ID: 35327923
5. An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning. Chen Z; Ding J; Wu F; Zhang C; Sun Y; Sun J; Liu S; Ji Y. Entropy (Basel); 2022 Sep; 24(10). PubMed ID: 37420397
6. DEFEAT: Decoupled feature attack across deep neural networks. Huang L; Gao C; Liu N. Neural Netw; 2022 Dec; 156:13-28. PubMed ID: 36228335
7. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification. Wang D; Jin W; Wu Y. Sensors (Basel); 2023 Mar; 23(6). PubMed ID: 36991962
9. Erosion Attack: Harnessing Corruption To Improve Adversarial Examples. Huang L; Gao C; Liu N. IEEE Trans Image Process; 2023; 32:4828-4841. PubMed ID: 37058378
10. Implicit adversarial data augmentation and robustness with Noise-based Learning. Panda P; Roy K. Neural Netw; 2021 Sep; 141:120-132. PubMed ID: 33894652
11. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness. Tavakoli M; Agostinelli F; Baldi P. Neural Netw; 2021 Aug; 140:1-12. PubMed ID: 33743319
12. Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks. Liu Q; Wen W. IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):3-14. PubMed ID: 34181553
13. LAFIT: Efficient and Reliable Evaluation of Adversarial Defenses With Latent Features. Yu Y; Gao X; Xu CZ. IEEE Trans Pattern Anal Mach Intell; 2024 Jan; 46(1):354-369. PubMed ID: 37831567
14. SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories. Che Z; Borji A; Zhai G; Ling S; Li J; Min X; Guo G; Le Callet P. IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1051-1065. PubMed ID: 33296311
15. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors. Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M. Med Image Anal; 2021 Oct; 73:102141. PubMed ID: 34246850
16. Adversarial Medical Image with Hierarchical Feature Hiding. Yao Q; He Z; Li Y; Lin Y; Ma K; Zheng Y; Kevin Zhou S. IEEE Trans Med Imaging; 2023 Nov; PP (early access). PubMed ID: 37995172