107 related articles for article (PubMed ID: 34529561)
1. T-BFA: Targeted Bit-Flip Adversarial Weight Attack. Rakin AS; He Z; Li J; Yao F; Chakrabarti C; Fan D. IEEE Trans Pattern Anal Mach Intell; 2021 Sep; PP():. PubMed ID: 34529561
2. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. Stutz D; Chandramoorthy N; Hein M; Schiele B. IEEE Trans Pattern Anal Mach Intell; 2023 Mar; 45(3):3632-3647. PubMed ID: 37815955
3. Universal adversarial attacks on deep neural networks for medical image classification. Hirano H; Minagi A; Takemoto K. BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181
4. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time. Miller D; Wang Y; Kesidis G. Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390
5. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks. Hirano H; Koga K; Takemoto K. PLoS One; 2020; 15(12):e0243963. PubMed ID: 33332412
6. Versatile Weight Attack via Flipping Limited Bits. Bai J; Wu B; Li Z; Xia ST. IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13653-13665. PubMed ID: 37463082
7. Perturbing BEAMs: EEG adversarial attack to deep learning models for epilepsy diagnosing. Yu J; Qiu K; Wang P; Su C; Fan Y; Cao Y. BMC Med Inform Decis Mak; 2023 Jul; 23(1):115. PubMed ID: 37415186
8. Boosting the transferability of adversarial examples via stochastic serial attack. Hao L; Hao K; Wei B; Tang XS. Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532
9. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet. Chen S; He Z; Sun C; Yang J; Huang X. IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):2188-2197. PubMed ID: 33095710
10. Compression Helps Deep Learning in Image Classification. Yang EH; Amer H; Jiang Y. Entropy (Basel); 2021 Jul; 23(7):. PubMed ID: 34356422
11. ApaNet: adversarial perturbations alleviation network for face verification. Sun G; Hu H; Su Y; Liu Q; Lu X. Multimed Tools Appl; 2023; 82(5):7443-7461. PubMed ID: 36035322
12. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers. Cao H; Si C; Sun Q; Liu Y; Li S; Gope P. Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327923
13. New Adversarial Image Detection Based on Sentiment Analysis. Wang Y; Li T; Li S; Yuan X; Ni W. IEEE Trans Neural Netw Learn Syst; 2024 Oct; 35(10):14060-14074. PubMed ID: 37204956
14. Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning. Minagi A; Hirano H; Takemoto K. J Imaging; 2022 Feb; 8(2):. PubMed ID: 35200740
15. Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading. Cheng Y; Guo Q; Juefei-Xu F; Fu H; Lin SW; Lin W. IEEE J Biomed Health Inform; 2024 Sep; PP():. PubMed ID: 39331557
16. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors. Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M. Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850
18. How Resilient Are Deep Learning Models in Medical Image Analysis? The Case of the Moment-Based Adversarial Attack (Mb-AdA). Maliamanis TV; Apostolidis KD; Papakostas GA. Biomedicines; 2022 Oct; 10(10):. PubMed ID: 36289807
19. Crafting Adversarial Perturbations via Transformed Image Component Swapping. Agarwal A; Ratha N; Vatsa M; Singh R. IEEE Trans Image Process; 2022; 31():7338-7349. PubMed ID: 36094979
20. Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface. Oyama T; Okura S; Yoshida K; Fujino T. Sensors (Basel); 2023 May; 23(10):. PubMed ID: 37430657