BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

153 related articles for article (PubMed ID: 35763880)

  • 1. A universal adversarial policy for text classifiers.
    Maimon G; Rokach L
    Neural Netw; 2022 Sep; 153():282-291. PubMed ID: 35763880

  • 2. Universal adversarial examples and perturbations for quantum classifiers.
    Gong W; Deng DL
    Natl Sci Rev; 2022 Jun; 9(6):nwab130. PubMed ID: 36590599

  • 3. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 4. Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning.
    Minagi A; Hirano H; Takemoto K
    J Imaging; 2022 Feb; 8(2):. PubMed ID: 35200740

  • 5. Towards Adversarial Robustness with Early Exit Ensembles.
    Qendro L; Mascolo C
    Annu Int Conf IEEE Eng Med Biol Soc; 2022 Jul; 2022():313-316. PubMed ID: 36086386

  • 6. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 7. Improving the robustness and accuracy of biomedical language models through adversarial training.
    Moradi M; Samwald M
    J Biomed Inform; 2022 Aug; 132():104114. PubMed ID: 35717011

  • 8. Adversarial attacks against supervised machine learning based network intrusion detection systems.
    Alshahrani E; Alghazzawi D; Alotaibi R; Rabie O
    PLoS One; 2022; 17(10):e0275971. PubMed ID: 36240162

  • 9. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 10. Frequency-Tuned Universal Adversarial Attacks on Texture Recognition.
    Deng Y; Karam LJ
    IEEE Trans Image Process; 2022; 31():5856-5868. PubMed ID: 36054395

  • 11. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181

  • 12. Cross-Modal Search for Social Networks via Adversarial Learning.
    Zhou N; Du J; Xue Z; Liu C; Li J
    Comput Intell Neurosci; 2020; 2020():7834953. PubMed ID: 32733547

  • 13. Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.
    Chen S; He Z; Sun C; Yang J; Huang X
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):2188-2197. PubMed ID: 33095710

  • 14. Hierarchical gated recurrent neural network with adversarial and virtual adversarial training on text classification.
    Poon HK; Yap WS; Tee YK; Lee WK; Goi BM
    Neural Netw; 2019 Nov; 119():299-312. PubMed ID: 31499354

  • 15. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Vidnerová P; Neruda R
    Neural Netw; 2020 Jul; 127():168-181. PubMed ID: 32361547

  • 16. Transferability of features for neural networks links to adversarial attacks and defences.
    Kotyan S; Matsuki M; Vargas DV
    PLoS One; 2022; 17(4):e0266060. PubMed ID: 35476838

  • 17. Generative Perturbation Network for Universal Adversarial Attacks on Brain-Computer Interfaces.
    Jung J; Moon H; Yu G; Hwang H
    IEEE J Biomed Health Inform; 2023 Nov; 27(11):5622-5633. PubMed ID: 37556336

  • 18. Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems.
    Anastasiou T; Karagiorgou S; Petrou P; Papamartzivanos D; Giannetsos T; Tsirigotaki G; Keizer J
    Sensors (Basel); 2022 Sep; 22(18):. PubMed ID: 36146258

  • 19. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary.
    Kotyan S; Vargas DV
    PLoS One; 2022; 17(4):e0265723. PubMed ID: 35421125

  • 20. Universal adversarial perturbations for CNN classifiers in EEG-based BCIs.
    Liu Z; Meng L; Zhang X; Fang W; Wu D
    J Neural Eng; 2021 Jul; 18(4):. PubMed ID: 34181585
