PUBMED FOR HANDHELDS

  • Title: Uni-image: Universal image construction for robust neural model.
    Author: Ho J, Lee BG, Kang DK.
    Journal: Neural Netw; 2020 Aug; 128():279-287. PubMed ID: 32454372.
    Abstract:
    Deep neural networks have shown high performance in prediction, but they are defenseless against adversarial examples generated by adversarial attack techniques. In image classification, these attacks usually perturb the pixels of an image to fool the deep neural network. To improve the robustness of neural networks, researchers have introduced several defense techniques against such attacks. To the best of our knowledge, adversarial training is one of the most effective defenses against adversarial examples. However, this defense can fail against a semantic adversarial image, in which arbitrary perturbations fool the network even though the modified image semantically represents the same object as the original. Against this background, we propose a novel defense technique, the Uni-Image Procedure (UIP) method. UIP generates a universal image (uni-image) from a given image, which may be a clean image or an image perturbed by an attack. The generated uni-image preserves its own characteristics (i.e., color) regardless of transformations applied to the original image. These transformations include inverting the pixel values of an image and modifying its saturation, hue, and value, among others. Our experimental results on several benchmark datasets show that our method not only defends against well-known adversarial attacks and semantic adversarial attacks but also boosts the robustness of the neural network.
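The abstract describes UIP only at a high level: it maps an image (clean or perturbed) to a canonical representation that is unchanged under transformations such as pixel inversion or hue/saturation/value shifts. The sketch below is a loose illustration of that invariance idea only, not the authors' actual UIP construction; the function name toy_uni_image and the min(v, 1 - v) transform are assumptions made for illustration.

# Illustrative sketch only: a toy input transform that is invariant to
# pixel-value inversion, loosely inspired by the idea of mapping a clean
# image and a transformed image to the same "uni-image". NOT the paper's
# UIP algorithm; names and the specific transform are assumptions.
import numpy as np

def toy_uni_image(image: np.ndarray) -> np.ndarray:
    """Map an image (float values in [0, 1]) to a representation that is
    unchanged when every pixel value v is replaced by 1 - v (inversion)."""
    return np.minimum(image, 1.0 - image)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3), dtype=np.float32)
    inverted = 1.0 - img
    # The original and the inverted image map to the same toy uni-image,
    # so a classifier fed this representation cannot be fooled by
    # inversion alone.
    assert np.allclose(toy_uni_image(img), toy_uni_image(inverted))

A real defense in the spirit of the paper would need invariance to a much broader family of semantic transformations (hue, saturation, value changes, etc.) while preserving enough information for accurate classification; this toy transform only demonstrates the invariance property for a single transformation.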