BIOMARKERS: Molecular Biopsy of Human Tumors - a resource for Precision Medicine

120 related articles for PubMed ID 37086543

  • 1. Differential evolution based dual adversarial camouflage: Fooling human eyes and object detectors.
    Sun J; Yao W; Jiang T; Wang D; Chen X
    Neural Netw; 2023 Jun; 163():256-271. PubMed ID: 37086543

  • 2. Extended Spatially Localized Perturbation GAN (eSLP-GAN) for Robust Adversarial Camouflage Patches.
    Kim Y; Kang H; Suryanto N; Larasati HT; Mukaroh A; Kim H
    Sensors (Basel); 2021 Aug; 21(16):. PubMed ID: 34450763

  • 3. Blinding and blurring the multi-object tracker with adversarial perturbations.
    Pang H; Ma R; Su J; Liu C; Gao Y; Jin Q
    Neural Netw; 2024 Aug; 176():106331. PubMed ID: 38701599

  • 4. Attack to Fool and Explain Deep Networks.
    Akhtar N; Jalwana MAAK; Bennamoun M; Mian A
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):5980-5995. PubMed ID: 34038356

  • 5. Recognizing Object by Components With Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks.
    Li X; Wang Z; Zhang B; Sun F; Hu X
    IEEE Trans Pattern Anal Mach Intell; 2023 Jul; 45(7):8861-8873. PubMed ID: 37021866

  • 6. Centered Multi-Task Generative Adversarial Network for Small Object Detection.
    Wang H; Wang J; Bai K; Sun Y
    Sensors (Basel); 2021 Jul; 21(15):. PubMed ID: 34372431

  • 7. An adversarial example attack method based on predicted bounding box adaptive deformation in optical remote sensing images.
    Dai L; Wang J; Yang B; Chen F; Zhang H
    PeerJ Comput Sci; 2024; 10():e2053. PubMed ID: 38855243

  • 8. Fooling Examples: Another Intriguing Property of Neural Networks.
    Zhang M; Chen Y; Qian C
    Sensors (Basel); 2023 Jul; 23(14):. PubMed ID: 37514672

  • 9. A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.
    Wang H; Li G; Liu X; Lin L
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):1725-1737. PubMed ID: 33074803

  • 10. Defending Person Detection Against Adversarial Patch Attack by Using Universal Defensive Frame.
    Yu Y; Lee HJ; Lee H; Ro YM
    IEEE Trans Image Process; 2022; 31():6976-6990. PubMed ID: 36318546

  • 11. Attention distraction with gradient sharpening for multi-task adversarial attack.
    Liu B; Hu J; Deng W
    Math Biosci Eng; 2023 Jun; 20(8):13562-13580. PubMed ID: 37679102

  • 12. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 13. Unified Adversarial Patch for Visible-Infrared Cross-Modal Attacks in the Physical World.
    Wei X; Huang Y; Sun Y; Yu J
    IEEE Trans Pattern Anal Mach Intell; 2024 Apr; 46(4):2348-2363. PubMed ID: 37930911

  • 14. Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.
    Deng Y; Karam LJ
    IEEE Trans Image Process; 2022; 31():5856-5868. PubMed ID: 36054395

  • 15. Deep learning models for electrocardiograms are susceptible to adversarial attack.
    Han X; Hu Y; Foschini L; Chinitz L; Jankelson L; Ranganath R
    Nat Med; 2020 Mar; 26(3):360-363. PubMed ID: 32152582

  • 16. Adversarial parameter defense by multi-step risk minimization.
    Zhang Z; Luo R; Ren X; Su Q; Li L; Sun X
    Neural Netw; 2021 Dec; 144():154-163. PubMed ID: 34500254

  • 17. A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization.
    Suryanto N; Kang H; Kim Y; Yun Y; Larasati HT; Kim H
    Sensors (Basel); 2020 Dec; 20(24):. PubMed ID: 33327453

  • 18. GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
    Chen Y; Yang H; Wang X; Wang Q; Zhou H
    Entropy (Basel); 2023 Mar; 25(3):. PubMed ID: 36981349

  • 19. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 20. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
    Mygdalis V; Tefas A; Pitas I
    Neural Netw; 2020 Apr; 124():296-307. PubMed ID: 32036227
