
124 related articles for article (PubMed ID: 37819820)

  • 1. Gradient Correction for White-Box Adversarial Attacks.
    Liu H; Ge Z; Zhou Z; Shang F; Liu Y; Jiao L
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; PP():. PubMed ID: 37819820

  • 2. Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
    Xie P; Shi S; Yang S; Qiao K; Liang N; Wang L; Chen J; Hu G; Yan B
    Front Neurorobot; 2021; 15():784053. PubMed ID: 34955802

  • 3. Robustifying Deep Networks for Medical Image Segmentation.
    Liu Z; Zhang J; Jog V; Loh PL; McMillan AB
    J Digit Imaging; 2021 Oct; 34(5):1279-1293. PubMed ID: 34545476

  • 4. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.
    Li H; Yu W; Huang H
    Neural Netw; 2023 Aug; 165():925-937. PubMed ID: 37441909

  • 5. Adversarial Attacks against Deep-Learning-Based Automatic Dependent Surveillance-Broadcast Unsupervised Anomaly Detection Models in the Context of Air Traffic Management.
    Luo P; Wang B; Tian J; Liu C; Yang Y
    Sensors (Basel); 2024 Jun; 24(11):. PubMed ID: 38894375

  • 6. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
    Zhang D; Dong Y
    Neural Netw; 2023 Oct; 167():730-740. PubMed ID: 37729788

  • 7. ApaNet: adversarial perturbations alleviation network for face verification.
    Sun G; Hu H; Su Y; Liu Q; Lu X
    Multimed Tools Appl; 2023; 82(5):7443-7461. PubMed ID: 36035322

  • 8. An adversarial example attack method based on predicted bounding box adaptive deformation in optical remote sensing images.
    Dai L; Wang J; Yang B; Chen F; Zhang H
    PeerJ Comput Sci; 2024; 10():e2053. PubMed ID: 38855243

  • 9. Attention distraction with gradient sharpening for multi-task adversarial attack.
    Liu B; Hu J; Deng W
    Math Biosci Eng; 2023 Jun; 20(8):13562-13580. PubMed ID: 37679102

  • 10. Remix: Towards the transferability of adversarial examples.
    Zhao H; Hao L; Hao K; Wei B; Cai X
    Neural Netw; 2023 Jun; 163():367-378. PubMed ID: 37119676

  • 11. Untargeted white-box adversarial attack to break into deep learning based COVID-19 monitoring face mask detection system.
    Sheikh BUH; Zafar A
    Multimed Tools Appl; 2023 May; ():1-27. PubMed ID: 37362697

  • 12. Sparse Adversarial Video Attacks via Superpixel-Based Jacobian Computation.
    Du Z; Liu F; Yan X
    Sensors (Basel); 2022 May; 22(10):. PubMed ID: 35632095

  • 13. GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
    Chen Y; Yang H; Wang X; Wang Q; Zhou H
    Entropy (Basel); 2023 Mar; 25(3):. PubMed ID: 36981349

  • 14. Adaptive Perturbation for Adversarial Attack.
    Yuan Z; Zhang J; Jiang Z; Li L; Shan S
    IEEE Trans Pattern Anal Mach Intell; 2024 Feb; PP():. PubMed ID: 38376968

  • 15. Towards Transferable Adversarial Attacks on Image and Video Transformers.
    Wei Z; Chen J; Goldblum M; Wu Z; Goldstein T; Jiang YG; Davis LS
    IEEE Trans Image Process; 2023; 32():6346-6358. PubMed ID: 37966925

  • 16. Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach.
    Kansal K; Krishna PS; Jain PB; R S; Honnavalli P; Eswaran S
    Heliyon; 2022 Oct; 8(10):e11209. PubMed ID: 36311356

  • 17. Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks.
    Liu Q; Wen W
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):3-14. PubMed ID: 34181553

  • 18. Adversarial Attack Against Deep Saliency Models Powered by Non-Redundant Priors.
    Che Z; Borji A; Zhai G; Ling S; Li J; Tian Y; Guo G; Le Callet P
    IEEE Trans Image Process; 2021; 30():1973-1988. PubMed ID: 33444138

  • 19. Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient.
    Liang L; Hu X; Deng L; Wu Y; Li G; Ding Y; Li P; Xie Y
    IEEE Trans Neural Netw Learn Syst; 2023 May; 34(5):2569-2583. PubMed ID: 34473634

  • 20. Fast Adversarial Training With Adaptive Step Size.
    Huang Z; Fan Y; Liu C; Zhang W; Zhang Y; Salzmann M; Susstrunk S; Wang J
    IEEE Trans Image Process; 2023; 32():6102-6114. PubMed ID: 37883291
