


162 related articles for article (PubMed ID: 31985427)

  • 1. Analyzing the Noise Robustness of Deep Neural Networks.
    Cao K; Liu M; Su H; Wu J; Zhu J; Liu S
    IEEE Trans Vis Comput Graph; 2021 Jul; 27(7):3289-3304. PubMed ID: 31985427

  • 2. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
    Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T
    IEEE Trans Image Process; 2021; 30():1291-1304. PubMed ID: 33290221

  • 3. Attention distraction with gradient sharpening for multi-task adversarial attack.
    Liu B; Hu J; Deng W
    Math Biosci Eng; 2023 Jun; 20(8):13562-13580. PubMed ID: 37679102

  • 4. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 5. Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach.
    Yu H; Liu A; Li G; Yang J; Zhang C
    IEEE Trans Image Process; 2021; 30():8955-8967. PubMed ID: 34699360

  • 6. Adversarial Examples: Attacks and Defenses for Deep Learning.
    Yuan X; He P; Zhu Q; Li X
    IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2805-2824. PubMed ID: 30640631

  • 7. Adversarial parameter defense by multi-step risk minimization.
    Zhang Z; Luo R; Ren X; Su Q; Li L; Sun X
    Neural Netw; 2021 Dec; 144():154-163. PubMed ID: 34500254

  • 8. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
    Liu R; Jin X; Hu D; Zhang J; Wang Y; Zhang J; Zhou W
    Front Neurorobot; 2023; 17():1129720. PubMed ID: 36845066

  • 9. Towards improving fast adversarial training in multi-exit network.
    Chen S; Shen H; Wang R; Wang X
    Neural Netw; 2022 Jun; 150():1-11. PubMed ID: 35279625

  • 10. Feature Distillation in Deep Attention Network Against Adversarial Examples.
    Chen X; Weng J; Deng X; Luo W; Lan Y; Tian Q
    IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3691-3705. PubMed ID: 34739380

  • 11. Adversarial attacks and defenses using feature-space stochasticity.
    Ukita J; Ohki K
    Neural Netw; 2023 Oct; 167():875-889. PubMed ID: 37722983

  • 12. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
    Liu A; Liu X; Yu H; Zhang C; Liu Q; Tao D
    IEEE Trans Image Process; 2021; 30():5769-5781. PubMed ID: 34161231

  • 13. Learning defense transformations for counterattacking adversarial examples.
    Li J; Zhang S; Cao J; Tan M
    Neural Netw; 2023 Jul; 164():177-185. PubMed ID: 37149918

  • 14. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
    Mygdalis V; Tefas A; Pitas I
    Neural Netw; 2020 Apr; 124():296-307. PubMed ID: 32036227

  • 15. A regularization method to improve adversarial robustness of neural networks for ECG signal classification.
    Ma L; Liang L
    Comput Biol Med; 2022 May; 144():105345. PubMed ID: 35240379

  • 16. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
    Chen L; Zhao L; Chen CY
    Med Phys; 2021 Oct; 48(10):6198-6212. PubMed ID: 34487364

  • 17. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
    Li X; Goodman D; Liu J; Wei T; Dou D
    Front Artif Intell; 2021; 4():752831. PubMed ID: 35156010

  • 18. Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias.
    Wang J; Liu A; Bai X; Liu X
    IEEE Trans Image Process; 2022; 31():598-611. PubMed ID: 34851825

  • 19. A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.
    Wang H; Li G; Liu X; Lin L
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):1725-1737. PubMed ID: 33074803

  • 20. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
    Dong J; Yang L; Wang Y; Xie X; Lai J
    IEEE Trans Image Process; 2023; 32():3862-3872. PubMed ID: 37428673
