229 related articles for the article with PubMed ID 38091756

  • 1. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 2. Adversarial Attack on Skeleton-Based Human Action Recognition.
    Liu J; Akhtar N; Mian A
    IEEE Trans Neural Netw Learn Syst; 2022 Apr; 33(4):1609-1622. PubMed ID: 33351768

  • 3. Crafting Adversarial Perturbations via Transformed Image Component Swapping.
    Agarwal A; Ratha N; Vatsa M; Singh R
    IEEE Trans Image Process; 2022; 31():7338-7349. PubMed ID: 36094979

  • 4. Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.
    Deng Y; Karam LJ
    IEEE Trans Image Process; 2022; 31():5856-5868. PubMed ID: 36054395

  • 5. Sparse Adversarial Video Attacks via Superpixel-Based Jacobian Computation.
    Du Z; Liu F; Yan X
    Sensors (Basel); 2022 May; 22(10):. PubMed ID: 35632095

  • 6. Defending Against Multiple and Unforeseen Adversarial Videos.
    Lo SY; Patel VM
    IEEE Trans Image Process; 2022; 31():962-973. PubMed ID: 34965207

  • 7. Temporal shuffling for defending deep action recognition models against adversarial attacks.
    Hwang J; Zhang H; Choi JH; Hsieh CJ; Lee JS
    Neural Netw; 2024 Jan; 169():388-397. PubMed ID: 37925766

  • 8. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
    Liu R; Jin X; Hu D; Zhang J; Wang Y; Zhang J; Zhou W
    Front Neurorobot; 2023; 17():1129720. PubMed ID: 36845066

  • 9. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 10. Adaptive Cross-Modal Transferable Adversarial Attacks From Images to Videos.
    Wei Z; Chen J; Wu Z; Jiang YG
    IEEE Trans Pattern Anal Mach Intell; 2024 May; 46(5):3772-3783. PubMed ID: 38153825

  • 11. DEFEAT: Decoupled feature attack across deep neural networks.
    Huang L; Gao C; Liu N
    Neural Netw; 2022 Dec; 156():13-28. PubMed ID: 36228335

  • 12. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
    Li X; Goodman D; Liu J; Wei T; Dou D
    Front Artif Intell; 2021; 4():752831. PubMed ID: 35156010

  • 13. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 14. Attack to Fool and Explain Deep Networks.
    Akhtar N; Jalwana MAAK; Bennamoun M; Mian A
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):5980-5995. PubMed ID: 34038356

  • 15. Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos.
    Wei X; Wang S; Yan H
    IEEE Trans Pattern Anal Mach Intell; 2023 Sep; 45(9):10898-10912. PubMed ID: 37030872

  • 16. An adversarial example attack method based on predicted bounding box adaptive deformation in optical remote sensing images.
    Dai L; Wang J; Yang B; Chen F; Zhang H
    PeerJ Comput Sci; 2024; 10():e2053. PubMed ID: 38855243

  • 17. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
    Zhang D; Dong Y
    Neural Netw; 2023 Oct; 167():730-740. PubMed ID: 37729788

  • 18. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 19. Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations.
    Allyn J; Allou N; Vidal C; Renou A; Ferdynus C
    Medicine (Baltimore); 2020 Dec; 99(50):e23568. PubMed ID: 33327315

  • 20. Frequency constraint-based adversarial attack on deep neural networks for medical image classification.
    Chen F; Wang J; Liu H; Kong W; Zhao Z; Ma L; Liao H; Zhang D
    Comput Biol Med; 2023 Sep; 164():107248. PubMed ID: 37515875
