Semantics-enhanced adversarial nets for text-to-image synthesis
This paper presents a new model, the Semantics-enhanced Generative Adversarial Network (SEGAN), for fine-grained text-to-image generation. We introduce two modules into SEGAN: a Semantic Consistency Module (SCM) and an Attention Competition Module (ACM). The SCM incorporates image-level semantic consistency into the training of the Generative Adversarial Network (GAN), diversifying the generated images and improving their structural coherence. A Siamese network and two types of semantic similarities are designed to map the synthesized image and the ground-truth image to nearby points in the latent semantic feature space. The ACM constructs adaptive attention weights to differentiate keywords from unimportant words, improving the stability and accuracy of SEGAN. Extensive experiments demonstrate that SEGAN significantly outperforms existing state-of-the-art methods in generating photo-realistic images. All source code and models will be released for comparative study.
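The semantic-consistency idea, pulling a synthesized image's embedding toward that of its ground-truth image while pushing it away from mismatched images, can be sketched as a contrastive margin loss. This is an illustrative sketch only: the function names, the cosine-similarity choice, and the margin value are assumptions, not the paper's published formulation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_consistency_loss(f_fake, f_real, f_mismatch, margin=0.2):
    """Contrastive-style loss on Siamese embeddings.

    Pulls the synthesized image's embedding (f_fake) toward its
    ground-truth image embedding (f_real) and pushes it away from a
    mismatched image embedding (f_mismatch). Zero loss once the
    positive similarity beats the negative one by at least `margin`.
    """
    pos = cosine_similarity(f_fake, f_real)
    neg = cosine_similarity(f_fake, f_mismatch)
    return max(0.0, margin + neg - pos)
```

In practice the embeddings would come from the two branches of a trained Siamese network; here plain vectors stand in for them.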
Publication Source
Proceedings of the IEEE International Conference on Computer Vision
Tan, H., Liu, X., Li, X., Zhang, Y., & Yin, B. (2019). Semantics-enhanced adversarial nets for text-to-image synthesis. Proceedings of the IEEE International Conference on Computer Vision, 2019-October, 10500-10509. https://doi.org/10.1109/ICCV.2019.01060