
232 related articles for article (PubMed ID: 36228335)

  • 1. DEFEAT: Decoupled feature attack across deep neural networks.
    Huang L; Gao C; Liu N
    Neural Netw; 2022 Dec; 156():13-28. PubMed ID: 36228335

  • 2. Erosion Attack: Harnessing Corruption To Improve Adversarial Examples.
    Huang L; Gao C; Liu N
    IEEE Trans Image Process; 2023; 32():4828-4841. PubMed ID: 37058378

  • 3. Remix: Towards the transferability of adversarial examples.
    Zhao H; Hao L; Hao K; Wei B; Cai X
    Neural Netw; 2023 Jun; 163():367-378. PubMed ID: 37119676

  • 4. SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories.
    Che Z; Borji A; Zhai G; Ling S; Li J; Min X; Guo G; Le Callet P
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1051-1065. PubMed ID: 33296311

  • 5. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 6. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 7. Crafting Adversarial Perturbations via Transformed Image Component Swapping.
    Agarwal A; Ratha N; Vatsa M; Singh R
    IEEE Trans Image Process; 2022; 31():7338-7349. PubMed ID: 36094979

  • 8. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.
    Li H; Yu W; Huang H
    Neural Netw; 2023 Aug; 165():925-937. PubMed ID: 37441909

  • 9. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
    Dong J; Yang L; Wang Y; Xie X; Lai J
    IEEE Trans Image Process; 2023; 32():3862-3872. PubMed ID: 37428673

  • 10. Toward Understanding and Boosting Adversarial Transferability From a Distribution Perspective.
    Zhu Y; Chen Y; Li X; Chen K; He Y; Tian X; Zheng B; Chen Y; Huang Q
    IEEE Trans Image Process; 2022; 31():6487-6501. PubMed ID: 36223353

  • 11. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 12. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.
    Chen S; He Z; Sun C; Yang J; Huang X
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):2188-2197. PubMed ID: 33095710

  • 13. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 14. Adversarial Examples Generation for Deep Product Quantization Networks on Image Retrieval.
    Chen B; Feng Y; Dai T; Bai J; Jiang Y; Xia ST; Wang X
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1388-1404. PubMed ID: 35380957

  • 15. Generalizable Black-Box Adversarial Attack With Meta Learning.
    Yin F; Zhang Y; Wu B; Feng Y; Zhang J; Fan Y; Yang Y
    IEEE Trans Pattern Anal Mach Intell; 2024 Mar; 46(3):1804-1818. PubMed ID: 37021863

  • 16. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
    Mahmood K; Gurevin D; van Dijk M; Nguyen PH
    Entropy (Basel); 2021 Oct; 23(10):. PubMed ID: 34682083

  • 17. A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.
    Wang Z; Shu X; Wang Y; Feng Y; Zhang L; Yi Z
    IEEE Trans Cybern; 2023 Aug; 53(8):5323-5335. PubMed ID: 36240037

  • 18. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
    Zhang D; Dong Y
    Neural Netw; 2023 Oct; 167():730-740. PubMed ID: 37729788

  • 19. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327923

  • 20. Adversarially robust neural networks with feature uncertainty learning and label embedding.
    Wang R; Ke H; Hu M; Wu W
    Neural Netw; 2024 Apr; 172():106087. PubMed ID: 38160621
