These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.
113 related articles for article (PubMed ID: 38652616)
1. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks. Niu Z, Sun Y, Miao Q, Jin R, Hua G. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):7589-7605. PubMed ID: 38652616.

2. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis. Jin R, Li X. Med Image Anal. 2023 Dec;90:102965. PubMed ID: 37804585.

3. Detection of Backdoors in Trained Classifiers Without Access to the Training Set. Xiang Z, Miller DJ, Kesidis G. IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1177-1191. PubMed ID: 33326384.

4. Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Wang H, Hong J, Zhang A, Zhou J, Wang Z. Adv Neural Inf Process Syst. 2022 Dec;35:36026-36039. PubMed ID: 37081923.

5. Backdoor Learning: A Survey. Li Y, Jiang Y, Li Z, Xia ST. IEEE Trans Neural Netw Learn Syst. 2024 Jan;35(1):5-22. PubMed ID: 35731760.

7. A Textual Backdoor Defense Method Based on Deep Feature Classification. Shao K, Yang J, Hu P, Li X. Entropy (Basel). 2023 Jan;25(2). PubMed ID: 36832587.

8. SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning. Li P, Huang J, Wu H, Zhang Z, Qi C. Neural Netw. 2024 Jun;174:106199. PubMed ID: 38452664.

9. On the Effectiveness of Adversarial Training Against Backdoor Attacks. Gao Y, Wu D, Zhang J, Gan G, Xia ST, Niu G, Sugiyama M. IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14878-14888. PubMed ID: 37314915.

10. Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set. Xiang Z, Miller DJ, Wang H, Kesidis G. Neural Comput. 2021 Apr;33(5):1329-1371. PubMed ID: 33617746.

11. Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface. Oyama T, Okura S, Yoshida K, Fujino T. Sensors (Basel). 2023 May;23(10). PubMed ID: 37430657.

12. Mitigating Accuracy-Robustness Trade-Off via Balanced Multi-Teacher Adversarial Distillation. Zhao S, Wang X, Wei X. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):9338-9352. PubMed ID: 38889035.

13. Exploring Robust Features for Improving Adversarial Robustness. Wang H, Deng Y, Yoo S, Lin Y. IEEE Trans Cybern. 2024 Sep;54(9):5141-5151. PubMed ID: 38593009.

14. Towards evaluating the robustness of deep diagnostic models by adversarial attack. Xu M, Zhang T, Li Z, Liu M, Zhang D. Med Image Anal. 2021 Apr;69:101977. PubMed ID: 33550005.

15. Learning defense transformations for counterattacking adversarial examples. Li J, Zhang S, Cao J, Tan M. Neural Netw. 2023 Jul;164:177-185. PubMed ID: 37149918.

16. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images. Haque SBU, Zafar A. J Imaging Inform Med. 2024 Feb;37(1):308-338. PubMed ID: 38343214.

17. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification. Wang D, Jin W, Wu Y. Sensors (Basel). 2023 Mar;23(6). PubMed ID: 36991962.

18. Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks. Zhang L, Zhou Y, Yang Y, Gao X. IEEE Trans Pattern Anal Mach Intell. 2024 Oct;46(10):6669-6687. PubMed ID: 38587963.

19. EEG-Based Brain-Computer Interfaces are Vulnerable to Backdoor Attacks. Meng L, Jiang X, Huang J, Zeng Z, Yu S, Jung TP, Lin CT, Chavarriaga R, Wu D. IEEE Trans Neural Syst Rehabil Eng. 2023;31:2224-2234. PubMed ID: 37145943.

20. Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging. Kanca E, Ayas S, Baykal Kablan E, Ekinci M. Med Biol Eng Comput. 2024 Oct. PubMed ID: 39453557.