These tools are no longer maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

259 related articles for article (PubMed ID: 35221776)

  • 1. Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.
    Tuna OF; Catak FO; Eskil MT
    Multimed Tools Appl; 2022; 81(8):11479-11500. PubMed ID: 35221776

  • 2. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
    Wang S; Cao Y; Chen X; Yao L; Wang X; Sheng QZ
    Front Big Data; 2022; 5():822783. PubMed ID: 35592793

  • 3. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 4. ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers.
    Cao H; Si C; Sun Q; Liu Y; Li S; Gope P
    Entropy (Basel); 2022 Mar; 24(3):. PubMed ID: 35327923

  • 5. Adversarial example defense based on image reconstruction.
    Zhang YA; Xu H; Pei C; Yang G
    PeerJ Comput Sci; 2021; 7():e811. PubMed ID: 35036533

  • 6. Perturbing BEAMs: EEG adversarial attack to deep learning models for epilepsy diagnosing.
    Yu J; Qiu K; Wang P; Su C; Fan Y; Cao Y
    BMC Med Inform Decis Mak; 2023 Jul; 23(1):115. PubMed ID: 37415186

  • 7. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5306-5324. PubMed ID: 38349823

  • 8. Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach.
    Alkhowaiter M; Kholidy H; Alyami MA; Alghamdi A; Zou C
    Sensors (Basel); 2023 Jul; 23(14):. PubMed ID: 37514582

  • 9. A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning.
    Wang H; Li G; Liu X; Lin L
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):1725-1737. PubMed ID: 33074803

  • 10. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
    Liu R; Jin X; Hu D; Zhang J; Wang Y; Zhang J; Zhou W
    Front Neurorobot; 2023; 17():1129720. PubMed ID: 36845066

  • 11. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
    Dong J; Yang L; Wang Y; Xie X; Lai J
    IEEE Trans Image Process; 2023; 32():3862-3872. PubMed ID: 37428673

  • 12. Adversarial attacks against supervised machine learning based network intrusion detection systems.
    Alshahrani E; Alghazzawi D; Alotaibi R; Rabie O
    PLoS One; 2022; 17(10):e0275971. PubMed ID: 36240162

  • 13. Approaching Adversarial Example Classification with Chaos Theory.
    Pedraza A; Deniz O; Bueno G
    Entropy (Basel); 2020 Oct; 22(11):. PubMed ID: 33286969

  • 14. Adv-BDPM: Adversarial attack based on Boundary Diffusion Probability Model.
    Zhang D; Dong Y
    Neural Netw; 2023 Oct; 167():730-740. PubMed ID: 37729788

  • 15. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 16. Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks.
    Hwang RH; Lin JY; Hsieh SY; Lin HY; Lin CL
    Sensors (Basel); 2023 Jan; 23(2):. PubMed ID: 36679651

  • 17. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 18. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
    Liu A; Liu X; Yu H; Zhang C; Liu Q; Tao D
    IEEE Trans Image Process; 2021; 30():5769-5781. PubMed ID: 34161231

  • 19. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images.
    Haque SBU; Zafar A
    J Imaging Inform Med; 2024 Feb; 37(1):308-338. PubMed ID: 38343214

  • 20. Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS).
    Sheikh ZA; Singh Y; Singh PK; Gonçalves PJS
    Sensors (Basel); 2023 Jun; 23(12):. PubMed ID: 37420626
