BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

267 related articles for PubMed ID 33743319

  • 1. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.
    Tavakoli M; Agostinelli F; Baldi P
    Neural Netw; 2021 Aug; 140():1-12. PubMed ID: 33743319

  • 2. Parametric Deformable Exponential Linear Units for deep neural networks.
    Cheng Q; Li H; Wu Q; Ma L; Ngan KN
    Neural Netw; 2020 May; 125():281-289. PubMed ID: 32151915

  • 3. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Vidnerová P; Neruda R
    Neural Netw; 2020 Jul; 127():168-181. PubMed ID: 32361547

  • 4. Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification.
    Wang D; Jin W; Wu Y
    Sensors (Basel); 2023 Mar; 23(6):. PubMed ID: 36991962

  • 5. Interpolated Adversarial Training: Achieving robust neural networks without sacrificing too much accuracy.
    Lamb A; Verma V; Kawaguchi K; Matyasko A; Khosla S; Kannala J; Bengio Y
    Neural Netw; 2022 Oct; 154():218-233. PubMed ID: 35930854

  • 6. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 7. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 8. Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks.
    Liu F; Xu M; Li G; Pei J; Shi L; Zhao R
    Neural Netw; 2021 Jan; 133():148-156. PubMed ID: 33217683

  • 9. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 10. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
    Liu A; Liu X; Yu H; Zhang C; Liu Q; Tao D
    IEEE Trans Image Process; 2021; 30():5769-5781. PubMed ID: 34161231

  • 11. Implicit adversarial data augmentation and robustness with Noise-based Learning.
    Panda P; Roy K
    Neural Netw; 2021 Sep; 141():120-132. PubMed ID: 33894652

  • 12. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128():61-72. PubMed ID: 32442627

  • 13. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 14. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary.
    Kotyan S; Vargas DV
    PLoS One; 2022; 17(4):e0265723. PubMed ID: 35421125

  • 15. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
    Zhang D; Dong Y
    Neural Netw; 2023 Oct; 167():730-740. PubMed ID: 37729788

  • 16. Towards improving fast adversarial training in multi-exit network.
    Chen S; Shen H; Wang R; Wang X
    Neural Netw; 2022 Jun; 150():1-11. PubMed ID: 35279625

  • 17. Discovering Parametric Activation Functions.
    Bingham G; Miikkulainen R
    Neural Netw; 2022 Apr; 148():48-65. PubMed ID: 35066417

  • 18. A Dual Robust Graph Neural Network Against Graph Adversarial Attacks.
    Tao Q; Liao J; Zhang E; Li L
    Neural Netw; 2024 Jul; 175():106276. PubMed ID: 38599138

  • 19. On the robustness of skeleton detection against adversarial attacks.
    Bai X; Yang M; Liu Z
    Neural Netw; 2020 Dec; 132():416-427. PubMed ID: 33022470

  • 20. Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
    Mahmood K; Gurevin D; van Dijk M; Nguyen PH
    Entropy (Basel); 2021 Oct; 23(10):. PubMed ID: 34682083
