BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

332 related articles for article (PubMed ID: 35156010)

  • 1. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
    Li X; Goodman D; Liu J; Wei T; Dou D
    Front Artif Intell; 2021; 4():752831. PubMed ID: 35156010

  • 2. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
    Wang D; Jin W; Wu Y
    Sensors (Basel); 2023 Mar; 23(6):. PubMed ID: 36991962

  • 3. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 4. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 5. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 6. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
    Chen L; Zhao L; Chen CY
    Med Phys; 2021 Oct; 48(10):6198-6212. PubMed ID: 34487364

  • 7. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
    Wang S; Cao Y; Chen X; Yao L; Wang X; Sheng QZ
    Front Big Data; 2022; 5():822783. PubMed ID: 35592793

  • 8. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
    Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T
    IEEE Trans Image Process; 2021; 30():1291-1304. PubMed ID: 33290221

  • 9. Towards improving fast adversarial training in multi-exit network.
    Chen S; Shen H; Wang R; Wang X
    Neural Netw; 2022 Jun; 150():1-11. PubMed ID: 35279625

  • 10. Improving adversarial robustness of medical imaging systems via adding global attention noise.
    Dai Y; Qian Y; Lu F; Wang B; Gu Z; Wang W; Wan J; Zhang Y
    Comput Biol Med; 2023 Sep; 164():107251. PubMed ID: 37480679

  • 11. Attention distraction with gradient sharpening for multi-task adversarial attack.
    Liu B; Hu J; Deng W
    Math Biosci Eng; 2023 Jun; 20(8):13562-13580. PubMed ID: 37679102

  • 12. Mitigating Accuracy-Robustness Trade-Off Via Balanced Multi-Teacher Adversarial Distillation.
    Zhao S; Wang X; Wei X
    IEEE Trans Pattern Anal Mach Intell; 2024 Jun; PP():. PubMed ID: 38889035

  • 13. Adversarial Attack Against Deep Saliency Models Powered by Non-Redundant Priors.
    Che Z; Borji A; Zhai G; Ling S; Li J; Tian Y; Guo G; Le Callet P
    IEEE Trans Image Process; 2021; 30():1973-1988. PubMed ID: 33444138

  • 14. Defense against adversarial attacks based on color space transformation.
    Wang H; Wu C; Zheng K
    Neural Netw; 2024 May; 173():106176. PubMed ID: 38402810

  • 15. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Feb; PP():. PubMed ID: 38349823

  • 16. Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
    Xie P; Shi S; Yang S; Qiao K; Liang N; Wang L; Chen J; Hu G; Yan B
    Front Neurorobot; 2021; 15():784053. PubMed ID: 34955802

  • 17. Approaching Adversarial Example Classification with Chaos Theory.
    Pedraza A; Deniz O; Bueno G
    Entropy (Basel); 2020 Oct; 22(11):. PubMed ID: 33286969

  • 18. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 19. Perturbation diversity certificates robust generalization.
    Qian Z; Zhang S; Huang K; Wang Q; Yi X; Gu B; Xiong H
    Neural Netw; 2024 Apr; 172():106117. PubMed ID: 38232423

  • 20. Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks.
    Ma L; Liang L
    Comput Methods Programs Biomed; 2023 Oct; 240():107687. PubMed ID: 37392695
