These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

114 related articles for article (PubMed ID: 39123921)

  • 1. An Empirical Study on the Effect of Training Data Perturbations on Neural Network Robustness.
    Wang J; Wu Z; Lu M; Ai J
    Sensors (Basel); 2024 Jul; 24(15):. PubMed ID: 39123921

  • 2. Guiding the retraining of convolutional neural networks against adversarial inputs.
    Durán F; Martínez-Fernández S; Felderer M; Franch X
    PeerJ Comput Sci; 2023; 9():e1454. PubMed ID: 37705636

  • 3. Towards improving fast adversarial training in multi-exit network.
    Chen S; Shen H; Wang R; Wang X
    Neural Netw; 2022 Jun; 150():1-11. PubMed ID: 35279625

  • 4. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 5. Toward Intrinsic Adversarial Robustness Through Probabilistic Training.
    Dong J; Yang L; Wang Y; Xie X; Lai J
    IEEE Trans Image Process; 2023; 32():3862-3872. PubMed ID: 37428673

  • 6. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 7. Adversarial co-training for semantic segmentation over medical images.
    Xie H; Fu C; Zheng X; Zheng Y; Sham CW; Wang X
    Comput Biol Med; 2023 May; 157():106736. PubMed ID: 36958238

  • 8. How adversarial attacks can disrupt seemingly stable accurate classifiers.
    Sutton OJ; Zhou Q; Tyukin IY; Gorban AN; Bastounis A; Higham DJ
    Neural Netw; 2024 Dec; 180():106711. PubMed ID: 39299037

  • 9. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
    Wang S; Cao Y; Chen X; Yao L; Wang X; Sheng QZ
    Front Big Data; 2022; 5():822783. PubMed ID: 35592793

  • 10. Perturbation diversity certificates robust generalization.
    Qian Z; Zhang S; Huang K; Wang Q; Yi X; Gu B; Xiong H
    Neural Netw; 2024 Apr; 172():106117. PubMed ID: 38232423

  • 11. Adversarial Examples for Hamming Space Search.
    Yang E; Liu T; Deng C; Tao D
    IEEE Trans Cybern; 2020 Apr; 50(4):1473-1484. PubMed ID: 30561358

  • 12. Adversarial Robustness with Partial Isometry.
    Shi-Garrier L; Bouaynaya NC; Delahaye D
    Entropy (Basel); 2024 Jan; 26(2):. PubMed ID: 38392358

  • 13. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 14. Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations.
    Allyn J; Allou N; Vidal C; Renou A; Ferdynus C
    Medicine (Baltimore); 2020 Dec; 99(50):e23568. PubMed ID: 33327315

  • 15. Adversarial Attack and Defense in Deep Ranking.
    Zhou M; Wang L; Niu Z; Zhang Q; Zheng N; Hua G
    IEEE Trans Pattern Anal Mach Intell; 2024 Aug; 46(8):5306-5324. PubMed ID: 38349823

  • 16. Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity.
    Zhang C; Liu A; Liu X; Xu Y; Yu H; Ma Y; Li T
    IEEE Trans Image Process; 2021; 30():1291-1304. PubMed ID: 33290221

  • 17. Towards Adversarial Robustness with Early Exit Ensembles.
    Qendro L; Mascolo C
    Annu Int Conf IEEE Eng Med Biol Soc; 2022 Jul; 2022():313-316. PubMed ID: 36086386

  • 18. Defending Against Multiple and Unforeseen Adversarial Videos.
    Lo SY; Patel VM
    IEEE Trans Image Process; 2022; 31():962-973. PubMed ID: 34965207

  • 19. Analyzing the Noise Robustness of Deep Neural Networks.
    Cao K; Liu M; Su H; Wu J; Zhu J; Liu S
    IEEE Trans Vis Comput Graph; 2021 Jul; 27(7):3289-3304. PubMed ID: 31985427

  • 20. Improving the robustness and accuracy of biomedical language models through adversarial training.
    Moradi M; Samwald M
    J Biomed Inform; 2022 Aug; 132():104114. PubMed ID: 35717011
