

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

184 related articles for article (PubMed ID: 31634825)

  • 21. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327923

  • 22. Adversarial example defense based on image reconstruction.
    Zhang YA; Xu H; Pei C; Yang G
    PeerJ Comput Sci; 2021; 7():e811. PubMed ID: 35036533

  • 23. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 24. Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks.
    Rannen Triki A; Blaschko MB; Jung YM; Song S; Han HJ; Kim SI; Joo C
    Comput Med Imaging Graph; 2018 Nov; 69():21-32. PubMed ID: 30172090

  • 25. How to handle noisy labels for robust learning from uncertainty.
    Ji D; Oh D; Hyun Y; Kwon OM; Park MJ
    Neural Netw; 2021 Nov; 143():209-217. PubMed ID: 34157645

  • 26. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
    Chen L; Zhao L; Chen CY
    Med Phys; 2021 Oct; 48(10):6198-6212. PubMed ID: 34487364

  • 27. Perturbation diversity certificates robust generalization.
    Qian Z; Zhang S; Huang K; Wang Q; Yi X; Gu B; Xiong H
    Neural Netw; 2024 Apr; 172():106117. PubMed ID: 38232423

  • 28. Recent Advances in Large Margin Learning.
    Guo Y; Zhang C
    IEEE Trans Pattern Anal Mach Intell; 2022 Oct; 44(10):7167-7174. PubMed ID: 34161238

  • 29. The developmental trajectory of object recognition robustness: Children are like small adults but unlike big deep neural networks.
    Huber LS; Geirhos R; Wichmann FA
    J Vis; 2023 Jul; 23(7):4. PubMed ID: 37410494

  • 30. Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias.
    Wang J; Liu A; Bai X; Liu X
    IEEE Trans Image Process; 2022; 31():598-611. PubMed ID: 34851825

  • 31. Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses.
    Lau CP; Liu J; Souri H; Lin WA; Feizi S; Chellappa R
    IEEE Trans Pattern Anal Mach Intell; 2023 Nov; 45(11):13054-13067. PubMed ID: 37335791

  • 32. Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning.
    Minagi A; Hirano H; Takemoto K
    J Imaging; 2022 Feb; 8(2):. PubMed ID: 35200740

  • 33. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 34. Approaching Adversarial Example Classification with Chaos Theory.
    Pedraza A; Deniz O; Bueno G
    Entropy (Basel); 2020 Oct; 22(11):. PubMed ID: 33286969

  • 35. A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization.
    Suryanto N; Kang H; Kim Y; Yun Y; Larasati HT; Kim H
    Sensors (Basel); 2020 Dec; 20(24):. PubMed ID: 33327453

  • 36. Clustering Approach for Detecting Multiple Types of Adversarial Examples.
    Choi SH; Bahk TU; Ahn S; Choi YH
    Sensors (Basel); 2022 May; 22(10):. PubMed ID: 35632235

  • 37. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.
    Chen S; He Z; Sun C; Yang J; Huang X
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):2188-2197. PubMed ID: 33095710

  • 38. Sinkhorn Adversarial Attack and Defense.
    Subramanyam AV
    IEEE Trans Image Process; 2022; 31():4039-4049. PubMed ID: 35679377

  • 39. Stable tensor neural networks for efficient deep learning.
    Newman E; Horesh L; Avron H; Kilmer ME
    Front Big Data; 2024; 7():1363978. PubMed ID: 38873283

  • 40. Feature Distillation in Deep Attention Network Against Adversarial Examples.
    Chen X; Weng J; Deng X; Luo W; Lan Y; Tian Q
    IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3691-3705. PubMed ID: 34739380
