These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

155 related articles for article (PubMed ID: 33886479)

  • 1. Breaking Neural Reasoning Architectures With Metamorphic Relation-Based Adversarial Examples.
    Chan A; Ma L; Juefei-Xu F; Ong YS; Xie X; Xue M; Liu Y
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6976-6982. PubMed ID: 33886479

  • 2. Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors.
    Kurniawan A; Ohsita Y; Murata M
    Sensors (Basel); 2022 Nov; 22(22):. PubMed ID: 36433250

  • 3. Unifying neural learning and symbolic reasoning for spinal medical report generation.
    Han Z; Wei B; Xi X; Chen B; Yin Y; Li S
    Med Image Anal; 2021 Jan; 67():101872. PubMed ID: 33142134

  • 4. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Vidnerová P; Neruda R
    Neural Netw; 2020 Jul; 127():168-181. PubMed ID: 32361547

  • 5. Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition.
    Lal S; Rehman SU; Shah JH; Meraj T; Rauf HT; Damaševičius R; Mohammed MA; Abdulkareem KH
    Sensors (Basel); 2021 Jun; 21(11):. PubMed ID: 34200216

  • 6. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
    Dong J; Yang L; Wang Y; Xie X; Lai J
    IEEE Trans Image Process; 2023; 32():3862-3872. PubMed ID: 37428673

  • 7. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary.
    Kotyan S; Vargas DV
    PLoS One; 2022; 17(4):e0265723. PubMed ID: 35421125

  • 8. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 9. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
    Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T
    IEEE Trans Image Process; 2021; 30():1291-1304. PubMed ID: 33290221

  • 10. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.
    Anastasiou T; Karagiorgou S; Petrou P; Papamartzivanos D; Giannetsos T; Tsirigotaki G; Keizer J
    Sensors (Basel); 2022 Sep; 22(18):. PubMed ID: 36146258

  • 11. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 12. Transferability of features for neural networks links to adversarial attacks and defences.
    Kotyan S; Matsuki M; Vargas DV
    PLoS One; 2022; 17(4):e0266060. PubMed ID: 35476838

  • 13. Adversarial and Random Transformations for Robust Domain Adaptation and Generalization.
    Xiao L; Xu J; Zhao D; Shang E; Zhu Q; Dai B
    Sensors (Basel); 2023 Jun; 23(11):. PubMed ID: 37300000

  • 14. Boosting adversarial robustness via self-paced adversarial training.
    He L; Ai Q; Yang X; Ren Y; Wang Q; Xu Z
    Neural Netw; 2023 Oct; 167():706-714. PubMed ID: 37729786

  • 15. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128():61-72. PubMed ID: 32442627

  • 16. Learning defense transformations for counterattacking adversarial examples.
    Li J; Zhang S; Cao J; Tan M
    Neural Netw; 2023 Jul; 164():177-185. PubMed ID: 37149918

  • 17. A Universal Detection Method for Adversarial Examples and Fake Images.
    Lai J; Huo Y; Hou R; Wang X
    Sensors (Basel); 2022 Apr; 22(9):. PubMed ID: 35591134

  • 18. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 19. Improving the robustness and accuracy of biomedical language models through adversarial training.
    Moradi M; Samwald M
    J Biomed Inform; 2022 Aug; 132():104114. PubMed ID: 35717011

  • 20. GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization.
    Lee S; Kim H; Lee J
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):2645-2651. PubMed ID: 35446760
