These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

107 related articles for article (PubMed ID: 37981457)

  • 1. Learning a robust foundation model against clean-label data poisoning attacks at downstream tasks.
    Zhou T; Yan H; Han B; Liu L; Zhang J
    Neural Netw; 2024 Jan; 169():756-763. PubMed ID: 37981457

  • 2. Adversarial attacks against supervised machine learning based network intrusion detection systems.
    Alshahrani E; Alghazzawi D; Alotaibi R; Rabie O
    PLoS One; 2022; 17(10):e0275971. PubMed ID: 36240162

  • 3. Multidomain active defense: Detecting multidomain backdoor poisoned samples via ALL-to-ALL decoupling training without clean datasets.
    Ma B; Wang J; Wang D; Meng B
    Neural Netw; 2023 Nov; 168():350-362. PubMed ID: 37797397

  • 4. Towards Adversarial Robustness for Multi-Mode Data through Metric Learning.
    Khan S; Chen JC; Liao WH; Chen CS
    Sensors (Basel); 2023 Jul; 23(13):. PubMed ID: 37448021

  • 5. Effective Transfer Learning with Label-Based Discriminative Feature Learning.
    Kim G; Kang S
    Sensors (Basel); 2022 Mar; 22(5):. PubMed ID: 35271172

  • 6. Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.
    Bortsova G; González-Gonzalo C; Wetstein SC; Dubost F; Katramados I; Hogeweg L; Liefers B; van Ginneken B; Pluim JPW; Veta M; Sánchez CI; de Bruijne M
    Med Image Anal; 2021 Oct; 73():102141. PubMed ID: 34246850

  • 7. Backdoor attack and defense in federated generative adversarial network-based medical image synthesis.
    Jin R; Li X
    Med Image Anal; 2023 Dec; 90():102965. PubMed ID: 37804585

  • 8. Ensemble machine learning model trained on a new synthesized dataset generalizes well for stress prediction using wearable devices.
    Vos G; Trinh K; Sarnyai Z; Rahimi Azghadi M
    J Biomed Inform; 2023 Dec; 148():104556. PubMed ID: 38048895

  • 9. LFighter: Defending against the label-flipping attack in federated learning.
    Jebreel NM; Domingo-Ferrer J; Sánchez D; Blanco-Justicia A
    Neural Netw; 2024 Feb; 170():111-126. PubMed ID: 37977088

  • 10. Towards evaluating the robustness of deep diagnostic models by adversarial attack.
    Xu M; Zhang T; Li Z; Liu M; Zhang D
    Med Image Anal; 2021 Apr; 69():101977. PubMed ID: 33550005

  • 11. Detection of Backdoors in Trained Classifiers Without Access to the Training Set.
    Xiang Z; Miller DJ; Kesidis G
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1177-1191. PubMed ID: 33326384

  • 12. Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS).
    Sheikh ZA; Singh Y; Singh PK; Gonçalves PJS
    Sensors (Basel); 2023 Jun; 23(12):. PubMed ID: 37420626

  • 13. Boosting the transferability of adversarial examples via stochastic serial attack.
    Hao L; Hao K; Wei B; Tang XS
    Neural Netw; 2022 Jun; 150():58-67. PubMed ID: 35305532

  • 14. A clinical text classification paradigm using weak supervision and deep representation.
    Wang Y; Sohn S; Liu S; Shen F; Wang L; Atkinson EJ; Amin S; Liu H
    BMC Med Inform Decis Mak; 2019 Jan; 19(1):1. PubMed ID: 30616584

  • 15. S-CUDA: Self-cleansing unsupervised domain adaptation for medical image segmentation.
    Liu L; Zhang Z; Li S; Ma K; Zheng Y
    Med Image Anal; 2021 Dec; 74():102214. PubMed ID: 34464837

  • 16. Adversarial multi-source transfer learning in healthcare: Application to glucose prediction for diabetic people.
    De Bois M; El Yacoubi MA; Ammi M
    Comput Methods Programs Biomed; 2021 Feb; 199():105874. PubMed ID: 33333366

  • 17. Node injection for class-specific network poisoning.
    Sharma AK; Kukreja R; Kharbanda M; Chakraborty T
    Neural Netw; 2023 Sep; 166():236-247. PubMed ID: 37517358

  • 18. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare.
    Mozaffari-Kermani M; Sur-Kolay S; Raghunathan A; Jha NK
    IEEE J Biomed Health Inform; 2015 Nov; 19(6):1893-905. PubMed ID: 25095272

  • 19. Uni-image: Universal image construction for robust neural model.
    Ho J; Lee BG; Kang DK
    Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372

  • 20. Adversarial concept drift detection under poisoning attacks for robust data stream mining.
    Korycki Ł; Krawczyk B
    Mach Learn; 2022 Jun; ():1-36. PubMed ID: 35668720
