These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

113 related articles for article (PubMed ID: 38652616)

  • 41. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 42. Remix: Towards the transferability of adversarial examples.
    Zhao H; Hao L; Hao K; Wei B; Cai X
    Neural Netw; 2023 Jun; 163():367-378. PubMed ID: 37119676

  • 43. Training Robust Deep Neural Networks via Adversarial Noise Propagation.
    Liu A; Liu X; Yu H; Zhang C; Liu Q; Tao D
    IEEE Trans Image Process; 2021; 30():5769-5781. PubMed ID: 34161231

  • 44. Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.
    Chen L; Zhao L; Chen CY
    Med Phys; 2021 Oct; 48(10):6198-6212. PubMed ID: 34487364

  • 45. Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning.
    Apostolidis KD; Papakostas GA
    J Imaging; 2022 May; 8(6):. PubMed ID: 35735954

  • 46. Universal adversarial attacks on deep neural networks for medical image classification.
    Hirano H; Minagi A; Takemoto K
    BMC Med Imaging; 2021 Jan; 21(1):9. PubMed ID: 33413181

  • 47. On the Robustness of Semantic Segmentation Models to Adversarial Attacks.
    Arnab A; Miksik O; Torr PHS
    IEEE Trans Pattern Anal Mach Intell; 2020 Dec; 42(12):3040-3053. PubMed ID: 31150338

  • 48. Online Alternate Generator against Adversarial Attacks.
    Li H; Zeng Y; Li G; Lin L; Yu Y
    IEEE Trans Image Process; 2020 Sep; PP():. PubMed ID: 32976100

  • 49. Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning.
    Yang J; Zheng J; Wang H; Li J; Sun H; Han W; Jiang N; Tan YA
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772101

  • 50. Defense against adversarial attacks: robust and efficient compressed optimized neural networks.
    Kraidia I; Ghenai A; Belhaouari SB
    Sci Rep; 2024 Mar; 14(1):6420. PubMed ID: 38494519

  • 51. On the role of deep learning model complexity in adversarial robustness for medical images.
    Rodriguez D; Nayak T; Chen Y; Krishnan R; Huang Y
    BMC Med Inform Decis Mak; 2022 Jun; 22(Suppl 2):160. PubMed ID: 35725429

  • 52. Improving Adversarial Robustness of ECG Classification Based on Lipschitz Constraints and Channel Activation Suppression.
    Chen X; Si Y; Zhang Z; Yang W; Feng J
    Sensors (Basel); 2024 May; 24(9):. PubMed ID: 38733060

  • 53. Towards improving fast adversarial training in multi-exit network.
    Chen S; Shen H; Wang R; Wang X
    Neural Netw; 2022 Jun; 150():1-11. PubMed ID: 35279625

  • 54. Advancing Adversarial Training by Injecting Booster Signal.
    Lee HJ; Yu Y; Ro YM
    IEEE Trans Neural Netw Learn Syst; 2024 Sep; 35(9):12665-12677. PubMed ID: 37058386

  • 55. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary.
    Kotyan S; Vargas DV
    PLoS One; 2022; 17(4):e0265723. PubMed ID: 35421125

  • 56. Feature Distillation in Deep Attention Network Against Adversarial Examples.
    Chen X; Weng J; Deng X; Luo W; Lan Y; Tian Q
    IEEE Trans Neural Netw Learn Syst; 2023 Jul; 34(7):3691-3705. PubMed ID: 34739380

  • 57. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 58. How Resilient Are Deep Learning Models in Medical Image Analysis? The Case of the Moment-Based Adversarial Attack (Mb-AdA).
    Maliamanis TV; Apostolidis KD; Papakostas GA
    Biomedicines; 2022 Oct; 10(10):. PubMed ID: 36289807

  • 59. Defense Against Adversarial Attacks by Reconstructing Images.
    Zhang S; Gao H; Rao Q
    IEEE Trans Image Process; 2021; 30():6117-6129. PubMed ID: 34197323

  • 60. Adversarial Examples Generation for Deep Product Quantization Networks on Image Retrieval.
    Chen B; Feng Y; Dai T; Bai J; Jiang Y; Xia ST; Wang X
    IEEE Trans Pattern Anal Mach Intell; 2023 Feb; 45(2):1388-1404. PubMed ID: 35380957
