BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

335 related articles for article (PubMed ID: 32036227)

  • 1. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
    Mygdalis V; Tefas A; Pitas I
    Neural Netw; 2020 Apr; 124():296-307. PubMed ID: 32036227

  • 2. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 3. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 4. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327923

  • 5. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128():61-72. PubMed ID: 32442627

  • 6. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Vidnerová P; Neruda R
    Neural Netw; 2020 Jul; 127():168-181. PubMed ID: 32361547

  • 7. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 8. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 9. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 10. Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach.
    Alkhowaiter M; Kholidy H; Alyami MA; Alghamdi A; Zou C
    Sensors (Basel); 2023 Jul; 23(14):. PubMed ID: 37514582

  • 11. ApaNet: adversarial perturbations alleviation network for face verification.
    Sun G; Hu H; Su Y; Liu Q; Lu X
    Multimed Tools Appl; 2023; 82(5):7443-7461. PubMed ID: 36035322

  • 12. Machine learning through cryptographic glasses: combating adversarial attacks by key-based diversified aggregation.
    Taran O; Rezaeifar S; Holotyak T; Voloshynovskiy S
    EURASIP J Inf Secur; 2020; 2020(1):10. PubMed ID: 32685910

  • 13. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
    Chen L; Zhao L; Chen CY
    Med Phys; 2021 Oct; 48(10):6198-6212. PubMed ID: 34487364

  • 14. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.
    Anastasiou T; Karagiorgou S; Petrou P; Papamartzivanos D; Giannetsos T; Tsirigotaki G; Keizer J
    Sensors (Basel); 2022 Sep; 22(18):. PubMed ID: 36146258

  • 15. Adversarial attacks and defenses using feature-space stochasticity.
    Ukita J; Ohki K
    Neural Netw; 2023 Oct; 167():875-889. PubMed ID: 37722983

  • 16. Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition.
    Lal S; Rehman SU; Shah JH; Meraj T; Rauf HT; Damaševičius R; Mohammed MA; Abdulkareem KH
    Sensors (Basel); 2021 Jun; 21(11):. PubMed ID: 34200216

  • 17. Image Super-Resolution as a Defense Against Adversarial Attacks.
    Mustafa A; Khan SH; Hayat M; Shen J; Shao L
    IEEE Trans Image Process; 2019 Sep; ():. PubMed ID: 31545722

  • 18. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 19. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
    Jin R; Li X
    Med Image Anal; 2023 Dec; 90():102965. PubMed ID: 37804585

  • 20. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181
