These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

142 related articles for article (PubMed ID: 36590599)

  • 1. Universal adversarial examples and perturbations for quantum classifiers.
    Gong W; Deng DL
    Natl Sci Rev; 2022 Jun; 9(6):nwab130. PubMed ID: 36590599

  • 2. Experimental quantum adversarial learning with programmable superconducting qubits.
    Ren W; Li W; Xu S; Wang K; Jiang W; Jin F; Zhu X; Chen J; Song Z; Zhang P; Dong H; Zhang X; Deng J; Gao Y; Zhang C; Wu Y; Zhang B; Guo Q; Li H; Wang Z; Biamonte J; Song C; Deng DL; Wang H
    Nat Comput Sci; 2022 Nov; 2(11):711-717. PubMed ID: 38177368

  • 3. A universal adversarial policy for text classifiers.
    Maimon G; Rokach L
    Neural Netw; 2022 Sep; 153():282-291. PubMed ID: 35763880

  • 4. Experimental demonstration of adversarial examples in learning topological phases.
    Zhang H; Jiang S; Wang X; Zhang W; Huang X; Ouyang X; Yu Y; Liu Y; Deng DL; Duan LM
    Nat Commun; 2022 Aug; 13(1):4993. PubMed ID: 36008401

  • 5. Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems.
    Wang S; Cao Y; Chen X; Yao L; Wang X; Sheng QZ
    Front Big Data; 2022; 5():822783. PubMed ID: 35592793

  • 6. How adversarial attacks can disrupt seemingly stable accurate classifiers.
    Sutton OJ; Zhou Q; Tyukin IY; Gorban AN; Bastounis A; Higham DJ
    Neural Netw; 2024 Dec; 180():106711. PubMed ID: 39299037

  • 7. Adversarial Examples: Attacks and Defenses for Deep Learning.
    Yuan X; He P; Zhu Q; Li X
    IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2805-2824. PubMed ID: 30640631

  • 8. Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations.
    Mopuri KR; Ganeshan A; Babu RV
    IEEE Trans Pattern Anal Mach Intell; 2019 Oct; 41(10):2452-2465. PubMed ID: 30072314

  • 9. Vulnerability of classifiers to evolutionary generated adversarial examples.
    Vidnerová P; Neruda R
    Neural Netw; 2020 Jul; 127():168-181. PubMed ID: 32361547

  • 10. Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks.
    Theagarajan R; Bhanu B
    IEEE Trans Pattern Anal Mach Intell; 2022 Dec; 44(12):9503-9520. PubMed ID: 34748482

  • 11. Adversarial Machine Learning for NextG Covert Communications Using Multiple Antennas.
    Kim B; Sagduyu Y; Davaslioglu K; Erpek T; Ulukus S
    Entropy (Basel); 2022 Jul; 24(8):. PubMed ID: 36010711

  • 12. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images.
    Haque SBU; Zafar A
    J Imaging Inform Med; 2024 Feb; 37(1):308-338. PubMed ID: 38343214

  • 13. Adversarial Examples for Hamming Space Search.
    Yang E; Liu T; Deng C; Tao D
    IEEE Trans Cybern; 2020 Apr; 50(4):1473-1484. PubMed ID: 30561358

  • 14. PSAT-GAN: Efficient Adversarial Attacks Against Holistic Scene Understanding.
    Wang L; Yoon KJ
    IEEE Trans Image Process; 2021; 30():7541-7553. PubMed ID: 34449361

  • 15. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences.
    Oregi I; Del Ser J; Pérez A; Lozano JA
    Neural Netw; 2020 Aug; 128():61-72. PubMed ID: 32442627

  • 16. Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond.
    Mu R; Marcolino L; Ni Q; Ruan W
    Neural Netw; 2024 Mar; 171():127-143. PubMed ID: 38091756

  • 17. Approaching Adversarial Example Classification with Chaos Theory.
    Pedraza A; Deniz O; Bueno G
    Entropy (Basel); 2020 Oct; 22(11):. PubMed ID: 33286969

  • 18. DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model.
    Liu R; Jin X; Hu D; Zhang J; Wang Y; Zhang J; Zhou W
    Front Neurorobot; 2023; 17():1129720. PubMed ID: 36845066

  • 19. Deep learning models for electrocardiograms are susceptible to adversarial attack.
    Han X; Hu Y; Foschini L; Chinitz L; Jankelson L; Ranganath R
    Nat Med; 2020 Mar; 26(3):360-363. PubMed ID: 32152582

  • 20. Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.
    Tuna OF; Catak FO; Eskil MT
    Multimed Tools Appl; 2022; 81(8):11479-11500. PubMed ID: 35221776

Page 1 of 8 (showing 20 of 142 related articles).