BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine
121 related articles for the article with PubMed ID 38701599:

  • 1. Blinding and blurring the multi-object tracker with adversarial perturbations.
    Pang H; Ma R; Su J; Liu C; Gao Y; Jin Q
    Neural Netw; 2024 Aug; 176():106331. PubMed ID: 38701599

  • 2. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 3. GLH: From Global to Local Gradient Attacks with High-Frequency Momentum Guidance for Object Detection.
    Chen Y; Yang H; Wang X; Wang Q; Zhou H
    Entropy (Basel); 2023 Mar; 25(3):. PubMed ID: 36981349

  • 4. MBT3D: Deep learning based multi-object tracker for bumblebee 3D flight path estimation.
    Stiemer LN; Thoma A; Braun C
    PLoS One; 2023; 18(9):e0291415. PubMed ID: 37738269

  • 5. STMMOT: Advancing multi-object tracking through spatiotemporal memory networks and multi-scale attention pyramids.
    Mukhtar H; Khan MUG
    Neural Netw; 2023 Nov; 168():363-379. PubMed ID: 37801917

  • 6. A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.
    Wang Z; Shu X; Wang Y; Feng Y; Zhang L; Yi Z
    IEEE Trans Cybern; 2023 Aug; 53(8):5323-5335. PubMed ID: 36240037

  • 7. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
    Miller D; Wang Y; Kesidis G
    Neural Comput; 2019 Aug; 31(8):1624-1670. PubMed ID: 31260390

  • 8. Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations.
    Allyn J; Allou N; Vidal C; Renou A; Ferdynus C
    Medicine (Baltimore); 2020 Dec; 99(50):e23568. PubMed ID: 33327315

  • 9. Differential evolution based dual adversarial camouflage: Fooling human eyes and object detectors.
    Sun J; Yao W; Jiang T; Wang D; Chen X
    Neural Netw; 2023 Jun; 163():256-271. PubMed ID: 37086543

  • 10. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 11. Extended Spatially Localized Perturbation GAN (eSLP-GAN) for Robust Adversarial Camouflage Patches.
    Kim Y; Kang H; Suryanto N; Larasati HT; Mukaroh A; Kim H
    Sensors (Basel); 2021 Aug; 21(16):. PubMed ID: 34450763

  • 12. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
    Liu R; Jin X; Hu D; Zhang J; Wang Y; Zhang J; Zhou W
    Front Neurorobot; 2023; 17():1129720. PubMed ID: 36845066

  • 13. DEFEAT: Decoupled feature attack across deep neural networks.
    Huang L; Gao C; Liu N
    Neural Netw; 2022 Dec; 156():13-28. PubMed ID: 36228335

  • 14. Frequency constraint-based adversarial attack on deep neural networks for medical image classification.
    Chen F; Wang J; Liu H; Kong W; Zhao Z; Ma L; Liao H; Zhang D
    Comput Biol Med; 2023 Sep; 164():107248. PubMed ID: 37515875

  • 15. Pedestrian multiple-object tracking based on FairMOT and circle loss.
    Che J; He Y; Wu J
    Sci Rep; 2023 Mar; 13(1):4525. PubMed ID: 36941322

  • 16. Attention distraction with gradient sharpening for multi-task adversarial attack.
    Liu B; Hu J; Deng W
    Math Biosci Eng; 2023 Jun; 20(8):13562-13580. PubMed ID: 37679102

  • 17. AttMOT: Improving Multiple-Object Tracking by Introducing Auxiliary Pedestrian Attributes.
    Li Y; Xiao Z; Yang L; Meng D; Zhou X; Fan H; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; PP():. PubMed ID: 38662556

  • 18. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 19. A Dual Robust Graph Neural Network Against Graph Adversarial Attacks.
    Tao Q; Liao J; Zhang E; Li L
    Neural Netw; 2024 Jul; 175():106276. PubMed ID: 38599138

  • 20. Remix: Towards the transferability of adversarial examples.
    Zhao H; Hao L; Hao K; Wei B; Cai X
    Neural Netw; 2023 Jun; 163():367-378. PubMed ID: 37119676
