570 related articles for PubMed ID 31260390 (page 1 of 29; articles 1-20 listed below)

  • 1. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 2. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
    Xiang Z; Miller DJ; Kesidis G
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1177-1191. PubMed ID: 33326384

  • 3. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128:61-72. PubMed ID: 32442627

  • 4. Development of an IoT Architecture Based on a Deep Neural Network against Cyber Attacks for Automated Guided Vehicles.
    Elsisi M; Tran MQ
    Sensors (Basel); 2021 Dec; 21(24). PubMed ID: 34960561

  • 5. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181

  • 6. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
    Mygdalis V; Tefas A; Pitas I
    Neural Netw; 2020 Apr; 124:296-307. PubMed ID: 32036227

  • 7. Machine learning through cryptographic glasses: combating adversarial attacks by key-based diversified aggregation.
    Taran O; Rezaeifar S; Holotyak T; Voloshynovskiy S
    EURASIP J Inf Secur; 2020; 2020(1):10. PubMed ID: 32685910

  • 8. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.
    Anastasiou T; Karagiorgou S; Petrou P; Papamartzivanos D; Giannetsos T; Tsirigotaki G; Keizer J
    Sensors (Basel); 2022 Sep; 22(18). PubMed ID: 36146258

  • 9. Adversarial attacks against supervised machine learning based network intrusion detection systems.
    Alshahrani E; Alghazzawi D; Alotaibi R; Rabie O
    PLoS One; 2022; 17(10):e0275971. PubMed ID: 36240162

  • 10. Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.
    Deng Y; Karam LJ
    IEEE Trans Image Process; 2022; 31:5856-5868. PubMed ID: 36054395

  • 11. Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks.
    Hwang RH; Lin JY; Hsieh SY; Lin HY; Lin CL
    Sensors (Basel); 2023 Jan; 23(2). PubMed ID: 36679651

  • 12. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73:102141. PubMed ID: 34246850

  • 13. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3). PubMed ID: 35327923

  • 14. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.
    Tavakoli M; Agostinelli F; Baldi P
    Neural Netw; 2021 Aug; 140:1-12. PubMed ID: 33743319

  • 15. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.
    Hirano H; Koga K; Takemoto K
    PLoS One; 2020; 15(12):e0243963. PubMed ID: 33332412

  • 16. Adversarial Feature Selection Against Evasion Attacks.
    Zhang F; Chan PP; Biggio B; Yeung DS; Roli F
    IEEE Trans Cybern; 2016 Mar; 46(3):766-777. PubMed ID: 25910268

  • 17. Adversarial example defense based on image reconstruction.
    Zhang YA; Xu H; Pei C; Yang G
    PeerJ Comput Sci; 2021; 7:e811. PubMed ID: 35036533

  • 18. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 19. Randomized Prediction Games for Adversarial Machine Learning.
    Rota Bulò S; Biggio B; Pillai I; Pelillo M; Roli F
    IEEE Trans Neural Netw Learn Syst; 2017 Nov; 28(11):2466-2478. PubMed ID: 27514067

  • 20. Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks.
    Liu Q; Wen W
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):3-14. PubMed ID: 34181553