286 related articles for the article with PubMed ID 35200740

  • 1. Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning.
    Minagi A; Hirano H; Takemoto K
    J Imaging; 2022 Feb; 8(2):. PubMed ID: 35200740

  • 2. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181

  • 3. Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.
    Hirano H; Koga K; Takemoto K
    PLoS One; 2020; 15(12):e0243963. PubMed ID: 33332412

  • 4. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 5. Enhanced covertness class discriminative universal adversarial perturbations.
    Gao H; Zhang H; Zhang X; Li W; Wang J; Gao F
    Neural Netw; 2023 Aug; 165():516-526. PubMed ID: 37348432

  • 6. Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations.
    Allyn J; Allou N; Vidal C; Renou A; Ferdynus C
    Medicine (Baltimore); 2020 Dec; 99(50):e23568. PubMed ID: 33327315

  • 7. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
    Chen L; Zhao L; Chen CY
    Med Phys; 2021 Oct; 48(10):6198-6212. PubMed ID: 34487364

  • 8. Adversarial Attacks on Medical Image Classification.
    Tsai MJ; Lin PY; Lee ME
    Cancers (Basel); 2023 Aug; 15(17):. PubMed ID: 37686504

  • 9. Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.
    Deng Y; Karam LJ
    IEEE Trans Image Process; 2022; 31():5856-5868. PubMed ID: 36054395

  • 10. Universal adversarial perturbations for CNN classifiers in EEG-based BCIs.
    Liu Z; Meng L; Zhang X; Fang W; Wu D
    J Neural Eng; 2021 Jul; 18(4):. PubMed ID: 34181585

  • 11. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images.
    Haque SBU; Zafar A
    J Imaging Inform Med; 2024 Feb; 37(1):308-338. PubMed ID: 38343214

  • 12. How Resilient Are Deep Learning Models in Medical Image Analysis? The Case of the Moment-Based Adversarial Attack (Mb-AdA).
    Maliamanis TV; Apostolidis KD; Papakostas GA
    Biomedicines; 2022 Oct; 10(10):. PubMed ID: 36289807

  • 13. A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.
    Wang Z; Shu X; Wang Y; Feng Y; Zhang L; Yi Z
    IEEE Trans Cybern; 2023 Aug; 53(8):5323-5335. PubMed ID: 36240037

  • 14. Compressive imaging for defending deep neural networks from adversarial attacks.
    Kravets V; Javidi B; Stern A
    Opt Lett; 2021 Apr; 46(8):1951-1954. PubMed ID: 33857114

  • 15. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 16. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128():61-72. PubMed ID: 32442627

  • 17. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 18. Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning.
    Apostolidis KD; Papakostas GA
    J Imaging; 2022 May; 8(6):. PubMed ID: 35735954

  • 19. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 20. Adversarial Examples: Attacks and Defenses for Deep Learning.
    Yuan X; He P; Zhu Q; Li X
    IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2805-2824. PubMed ID: 30640631
