305 related articles for PubMed ID 32685910; entries 1-20 are shown below.

  • 1. Machine learning through cryptographic glasses: combating adversarial attacks by key-based diversified aggregation.
    Taran O; Rezaeifar S; Holotyak T; Voloshynovskiy S
    EURASIP J Inf Secur; 2020; 2020(1):10. PubMed ID: 32685910

  • 2. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73:102141. PubMed ID: 34246850

  • 3. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.
    Anastasiou T; Karagiorgou S; Petrou P; Papamartzivanos D; Giannetsos T; Tsirigotaki G; Keizer J
    Sensors (Basel); 2022 Sep; 22(18). PubMed ID: 36146258

  • 4. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 5. Randomized Prediction Games for Adversarial Machine Learning.
    Rota Bulò S; Biggio B; Pillai I; Pelillo M; Roli F
    IEEE Trans Neural Netw Learn Syst; 2017 Nov; 28(11):2466-2478. PubMed ID: 27514067

  • 6. Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks.
    Hwang RH; Lin JY; Hsieh SY; Lin HY; Lin CL
    Sensors (Basel); 2023 Jan; 23(2). PubMed ID: 36679651

  • 7. Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach.
    Kansal K; Krishna PS; Jain PB; R S; Honnavalli P; Eswaran S
    Heliyon; 2022 Oct; 8(10):e11209. PubMed ID: 36311356

  • 8. Model Compression Hardens Deep Neural Networks: A New Perspective to Prevent Adversarial Attacks.
    Liu Q; Wen W
    IEEE Trans Neural Netw Learn Syst; 2023 Jan; 34(1):3-14. PubMed ID: 34181553

  • 9. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
    Mahmood K; Gurevin D; van Dijk M; Nguyen PH
    Entropy (Basel); 2021 Oct; 23(10). PubMed ID: 34682083

  • 10. Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors.
    Kurniawan A; Ohsita Y; Murata M
    Sensors (Basel); 2022 Nov; 22(22). PubMed ID: 36433250

  • 11. K-Anonymity inspired adversarial attack and multiple one-class classification defense.
    Mygdalis V; Tefas A; Pitas I
    Neural Netw; 2020 Apr; 124:296-307. PubMed ID: 32036227

  • 12. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128:61-72. PubMed ID: 32442627

  • 13. Improving Adversarial Robustness via Attention and Adversarial Logit Pairing.
    Li X; Goodman D; Liu J; Wei T; Dou D
    Front Artif Intell; 2021; 4:752831. PubMed ID: 35156010

  • 14. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 15. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3). PubMed ID: 35327923

  • 16. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Vidnerová P; Neruda R
    Neural Netw; 2020 Jul; 127:168-181. PubMed ID: 32361547

  • 17. Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS).
    Sheikh ZA; Singh Y; Singh PK; Gonçalves PJS
    Sensors (Basel); 2023 Jun; 23(12). PubMed ID: 37420626

  • 18. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
    Xiang Z; Miller DJ; Kesidis G
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1177-1191. PubMed ID: 33326384

  • 19. ApaNet: adversarial perturbations alleviation network for face verification.
    Sun G; Hu H; Su Y; Liu Q; Lu X
    Multimed Tools Appl; 2023; 82(5):7443-7461. PubMed ID: 36035322

  • 20. RobEns: Robust Ensemble Adversarial Machine Learning Framework for Securing IoT Traffic.
    Alkadi S; Al-Ahmadi S; Ben Ismail MM
    Sensors (Basel); 2024 Apr; 24(8). PubMed ID: 38676241
