These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

116 related articles for article (PubMed ID: 37015563)

  • 1. An Adaptive Black-Box Defense Against Trojan Attacks (TrojDef).
    Liu G; Khreishah A; Sharadgah F; Khalil I
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5367-5381. PubMed ID: 37015563

  • 2. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
    Xiang Z; Miller DJ; Kesidis G
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1177-1191. PubMed ID: 33326384

  • 3. Exploiting Missing Value Patterns for a Backdoor Attack on Machine Learning Models of Electronic Health Records: Development and Validation Study.
    Joe B; Park Y; Hamm J; Shin I; Lee J
    JMIR Med Inform; 2022 Aug; 10(8):e38440. PubMed ID: 35984701

  • 4. Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set.
    Xiang Z; Miller DJ; Wang H; Kesidis G
    Neural Comput; 2021 Apr; 33(5):1329-1371. PubMed ID: 33617746

  • 5. Designing Trojan Detectors in Neural Networks Using Interactive Simulations.
    Bajcsy P; Schaub NJ; Majurski M
    Appl Sci (Basel); 2021; 11(4):. PubMed ID: 34386268

  • 6. Multidomain active defense: Detecting multidomain backdoor poisoned samples via ALL-to-ALL decoupling training without clean datasets.
    Ma B; Wang J; Wang D; Meng B
    Neural Netw; 2023 Nov; 168():350-362. PubMed ID: 37797397

  • 7. On the Effectiveness of Adversarial Training Against Backdoor Attacks.
    Gao Y; Wu D; Zhang J; Gan G; Xia ST; Niu G; Sugiyama M
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37314915

  • 8. Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork.
    Wang H; Hong J; Zhang A; Zhou J; Wang Z
    Adv Neural Inf Process Syst; 2022 Dec; 35():36026-36039. PubMed ID: 37081923

  • 9. Poison Ink: Robust and Invisible Backdoor Attack.
    Zhang J; Dongdong C; Huang Q; Liao J; Zhang W; Feng H; Hua G; Yu N
    IEEE Trans Image Process; 2022; 31():5691-5705. PubMed ID: 36040942

  • 10. Explanatory subgraph attacks against Graph Neural Networks.
    Wang H; Liu T; Sheng Z; Li H
    Neural Netw; 2024 Apr; 172():106097. PubMed ID: 38286098

  • 11. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks.
    Niu Z; Sun Y; Miao Q; Jin R; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; PP():. PubMed ID: 38652616

  • 12. Deeply Supervised Discriminative Learning for Adversarial Defense.
    Mustafa A; Khan SH; Hayat M; Goecke R; Shen J; Shao L
    IEEE Trans Pattern Anal Mach Intell; 2021 Sep; 43(9):3154-3166. PubMed ID: 32149623

  • 13. SecureNet: Proactive intellectual property protection and model security defense for DNNs based on backdoor learning.
    Li P; Huang J; Wu H; Zhang Z; Qi C
    Neural Netw; 2024 Jun; 174():106199. PubMed ID: 38452664

  • 14. Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface.
    Oyama T; Okura S; Yoshida K; Fujino T
    Sensors (Basel); 2023 May; 23(10):. PubMed ID: 37430657

  • 15. A Textual Backdoor Defense Method Based on Deep Feature Classification.
    Shao K; Yang J; Hu P; Li X
    Entropy (Basel); 2023 Jan; 25(2):. PubMed ID: 36832587

  • 16. Backdoor Attack against Face Sketch Synthesis.
    Zhang S; Ye S
    Entropy (Basel); 2023 Jun; 25(7):. PubMed ID: 37509921

  • 17. Machine learning through cryptographic glasses: combating adversarial attacks by key-based diversified aggregation.
    Taran O; Rezaeifar S; Holotyak T; Voloshynovskiy S
    EURASIP J Inf Secur; 2020; 2020(1):10. PubMed ID: 32685910

  • 18. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
    Jin R; Li X
    Med Image Anal; 2023 Dec; 90():102965. PubMed ID: 37804585

  • 19. Backdoor Learning: A Survey.
    Li Y; Jiang Y; Li Z; Xia ST
    IEEE Trans Neural Netw Learn Syst; 2024 Jan; 35(1):5-22. PubMed ID: 35731760

  • 20. How to backdoor split learning.
    Yu F; Wang L; Zeng B; Zhao K; Pang Z; Wu T
    Neural Netw; 2023 Nov; 168():326-336. PubMed ID: 37782993
