These tools will no longer be maintained as of December 31, 2024. Archived website can be found here. PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


PUBMED FOR HANDHELDS



  • Title: GIU-GANs: Global Information Utilization for Generative Adversarial Networks.
    Author: Tian Y, Gong X, Tang J, Su B, Liu X, Zhang X.
    Journal: Neural Netw; 2022 Aug; 152():487-498. PubMed ID: 35640370.
    Abstract:
    Recently, with the rapid development of artificial intelligence, image generation based on deep learning has advanced significantly. Image generation based on Generative Adversarial Networks (GANs) is a promising line of study. However, because convolution kernels are spatially agnostic and channel-specific, the features extracted by conventional convolution-based GANs are constrained, and such GANs struggle to capture fine-grained detail in each image. Moreover, naively stacking convolutional layers inflates the number of parameters and layers in a GAN, yielding a high risk of overfitting. To overcome these limitations, this study proposes a GAN architecture called GIU-GANs (GIU: Global Information Utilization). GIU-GANs leverage a new GIU module, which integrates squeeze-and-excitation and involution to exploit global information via a channel attention mechanism, enhancing the quality of the generated images. Furthermore, standard Batch Normalization (BN) inevitably ignores the representation differences among the noise samples fed to the generator and thus degrades generated image quality; we therefore introduce Representative Batch Normalization into the GAN architecture. The CIFAR-10 and CelebA datasets are employed to demonstrate the effectiveness of the proposed model. Numerous experiments indicate that the proposed model achieves state-of-the-art performance.