268 related articles for article (PubMed ID: 33022470)
1. On the robustness of skeleton detection against adversarial attacks.
Bai X; Yang M; Liu Z
Neural Netw; 2020 Dec; 132():416-427. PubMed ID: 33022470
2. Adversarial Attack on Skeleton-Based Human Action Recognition.
Liu J; Akhtar N; Mian A
IEEE Trans Neural Netw Learn Syst; 2022 Apr; 33(4):1609-1622. PubMed ID: 33351768
3. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
Oregi I; Del Ser J; Pérez A; Lozano JA
Neural Netw; 2020 Aug; 128():61-72. PubMed ID: 32442627
4. ROSA: Robust Salient Object Detection Against Adversarial Attacks.
Li H; Li G; Yu Y
IEEE Trans Cybern; 2020 Nov; 50(11):4835-4847. PubMed ID: 31107676
5. Uni-image: Universal image construction for robust neural model.
Ho J; Lee BG; Kang DK
Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372
6. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
Mu R; Marcolino L; Ni Q; Ruan W
Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756
7. Hierarchical binding in convolutional neural networks: Making adversarial attacks geometrically challenging.
Leadholm N; Stringer S
Neural Netw; 2022 Nov; 155():258-286. PubMed ID: 36081198
8. On the role of deep learning model complexity in adversarial robustness for medical images.
Rodriguez D; Nayak T; Chen Y; Krishnan R; Huang Y
BMC Med Inform Decis Mak; 2022 Jun; 22(Suppl 2):160. PubMed ID: 35725429
9. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
Wang S; Cao Y; Chen X; Yao L; Wang X; Sheng QZ
Front Big Data; 2022; 5():822783. PubMed ID: 35592793
10. Exploring Adversarial Robustness of LiDAR Semantic Segmentation in Autonomous Driving.
Mahima KTY; Perera A; Anavatti S; Garratt M
Sensors (Basel); 2023 Dec; 23(23):. PubMed ID: 38067951
11. Adversarial Attack and Defense in Deep Ranking.
Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5306-5324. PubMed ID: 38349823
12. GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
Chen Y; Yang H; Wang X; Wang Q; Zhou H
Entropy (Basel); 2023 Mar; 25(3):. PubMed ID: 36981349
13. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
Xu M; Zhang T; Li Z; Liu M; Zhang D
Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005
14. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.
Tavakoli M; Agostinelli F; Baldi P
Neural Netw; 2021 Aug; 140():1-12. PubMed ID: 33743319
15. Robustifying models against adversarial attacks by Langevin dynamics.
Srinivasan V; Rohrer C; Marban A; Müller KR; Samek W; Nakajima S
Neural Netw; 2021 May; 137():1-17. PubMed ID: 33515855
16. Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification.
Xu Y; Du B; Zhang L
IEEE Trans Image Process; 2021; 30():8671-8685. PubMed ID: 34648444
17. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T
IEEE Trans Image Process; 2021; 30():1291-1304. PubMed ID: 33290221
18. Detecting the universal adversarial perturbations on high-density sEMG signals.
Xue B; Wu L; Liu A; Zhang X; Chen X; Chen X
Comput Biol Med; 2022 Oct; 149():105978. PubMed ID: 36037630
19. Vulnerability of classifiers to evolutionary generated adversarial examples.
Vidnerová P; Neruda R
Neural Netw; 2020 Jul; 127():168-181. PubMed ID: 32361547
20. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Miller D; Wang Y; Kesidis G
Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390