

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

113 related articles for article (PubMed ID: 38652616)

  • 21. IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions.
    Xu Y; Liu X; Ding K; Xin B
    Sensors (Basel); 2022 Nov; 22(22):. PubMed ID: 36433292

  • 22. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
    Khan S; Chen JC; Liao WH; Chen CS
    Sensors (Basel); 2023 Jul; 23(13):. PubMed ID: 37448021

  • 23. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
    Yu H; Liu A; Li G; Yang J; Zhang C
    IEEE Trans Image Process; 2021; 30():8955-8967. PubMed ID: 34699360

  • 24. Untargeted white-box adversarial attack to break into deep learning based COVID-19 monitoring face mask detection system.
    Sheikh BUH; Zafar A
    Multimed Tools Appl; 2023 May; ():1-27. PubMed ID: 37362697

  • 25. Auto encoder-based defense mechanism against popular adversarial attacks in deep learning.
    Ashraf SN; Siddiqi R; Farooq H
    PLoS One; 2024; 19(10):e0307363. PubMed ID: 39432550

  • 26. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 27. Unambiguous and High-Fidelity Backdoor Watermarking for Deep Neural Networks.
    Hua G; Teoh ABJ; Xiang Y; Jiang H
    IEEE Trans Neural Netw Learn Syst; 2024 Aug; 35(8):11204-11217. PubMed ID: 37028031

  • 28. Multidomain active defense: Detecting multidomain backdoor poisoned samples via ALL-to-ALL decoupling training without clean datasets.
    Ma B; Wang J; Wang D; Meng B
    Neural Netw; 2023 Nov; 168():350-362. PubMed ID: 37797397

  • 29. Backdoor Attack against Face Sketch Synthesis.
    Zhang S; Ye S
    Entropy (Basel); 2023 Jun; 25(7):. PubMed ID: 37509921

  • 30. Critical Path-Based Backdoor Detection for Deep Neural Networks.
    Jiang W; Wen X; Zhan J; Wang X; Song Z; Bian C
    IEEE Trans Neural Netw Learn Syst; 2024 Mar; 35(3):4032-4046. PubMed ID: 36074883

  • 31. A regularization method to improve adversarial robustness of neural networks for ECG signal classification.
    Ma L; Liang L
    Comput Biol Med; 2022 May; 144():105345. PubMed ID: 35240379

  • 32. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5306-5324. PubMed ID: 38349823

  • 33. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
    Li X; Goodman D; Liu J; Wei T; Dou D
    Front Artif Intell; 2021; 4():752831. PubMed ID: 35156010

  • 34. Revisiting the Trade-Off Between Accuracy and Robustness via Weight Distribution of Filters.
    Wei X; Zhao S; Li B
    IEEE Trans Pattern Anal Mach Intell; 2024 Dec; 46(12):8870-8882. PubMed ID: 38848237

  • 35. Image Super-Resolution as a Defense Against Adversarial Attacks.
    Mustafa A; Khan SH; Hayat M; Shen J; Shao L
    IEEE Trans Image Process; 2019 Sep; ():. PubMed ID: 31545722

  • 36. Boosting adversarial robustness via self-paced adversarial training.
    He L; Ai Q; Yang X; Ren Y; Wang Q; Xu Z
    Neural Netw; 2023 Oct; 167():706-714. PubMed ID: 37729786

  • 37. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
    Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T
    IEEE Trans Image Process; 2021; 30():1291-1304. PubMed ID: 33290221

  • 38. Implicit adversarial data augmentation and robustness with Noise-based Learning.
    Panda P; Roy K
    Neural Netw; 2021 Sep; 141():120-132. PubMed ID: 33894652

  • 39. A Dual Robust Graph Neural Network Against Graph Adversarial Attacks.
    Tao Q; Liao J; Zhang E; Li L
    Neural Netw; 2024 Jul; 175():106276. PubMed ID: 38599138

  • 40. Explanatory subgraph attacks against Graph Neural Networks.
    Wang H; Liu T; Sheng Z; Li H
    Neural Netw; 2024 Apr; 172():106097. PubMed ID: 38286098
