Open Access

ITM Web Conf., Volume 78, 2025
International Conference on Computer Science and Electronic Information Technology (CSEIT 2025)

| Article Number | 04021 |
|---|---|
| Number of page(s) | 8 |
| Section | Foundations and Frontiers in Multimodal AI, Large Models, and Generative Technologies |
| DOI | https://doi.org/10.1051/itmconf/20257804021 |
| Published online | 08 September 2025 |
- ‘What Are Deepfakes and How Are They Created?’, https://spectrum.ieee.org/electronic-health-records, accessed 1 April 2025
- ‘FaceApp: Perfect Face Editor on the App Store’, https://apps.apple.com/gb/app/faceapp-perfect-face-editor/id1180884341, accessed 1 April 2025
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al.: ‘Generative Adversarial Nets’. Advances in Neural Information Processing Systems, MIT Press, 2014, pp. 2672–2680
- Wang, J. Q., Zhang, K., Li, P. J.: ‘Review of face attribute synthesis techniques based on generative adversarial network’, Application Research of Computers, 2025, 42, (3), pp. 650–662
- Moti, Z., Hashemi, S., Namavar, A.: ‘Discovering Future Malware Variants By Generating New Malware Samples Using Generative Adversarial Network’. International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 2019, pp. 319–324
- Radford, A., Metz, L., Chintala, S.: ‘Unsupervised representation learning with deep convolutional generative adversarial networks’, Computer Science, 2015
- Arjovsky, M., Chintala, S., Bottou, L.: ‘Wasserstein generative adversarial networks’. Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia, 2017, pp. 214–223
- Karras, T., Aila, T.: ‘Progressive Growing of GANs for Improved Quality, Stability, and Variation’, http://arxiv.org/abs/1710.10196, 2017
- Karras, T., Laine, S., Aila, T.: ‘A Style-Based Generator Architecture for Generative Adversarial Networks’, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43, (12), pp. 4217–4228
- Karras, T., Laine, S., Aittala, M., et al.: ‘Analyzing and Improving the Image Quality of StyleGAN’, 2019
- Karras, T., Aittala, M., Laine, S., et al.: ‘Alias-free generative adversarial networks’. Curran Associates Inc., Red Hook, NY, USA, 2021, pp. 852–863
- Yuan, Z., Zhang, J., Shan, S. G., et al.: ‘Attributes Aware Face Generation with Generative Adversarial Networks’. 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 2021, pp. 1657–1664
- Zhang, H., Xu, T., Li, H. S., et al.: ‘StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks’. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 5908–5916
- Xu, T., Zhang, P. C., Huang, Q. Y., et al.: ‘AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks’. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 1316–1324
- Li, B. W., Qi, X. J., Lukasiewicz, T., et al.: ‘Controllable text-to-image generation’. Proceedings of the 33rd International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, 2019, pp. 2065–2075
- Xia, W. H., Yang, Y. J., Xue, J. H., et al.: ‘TediGAN: Text-guided diverse face image generation and manipulation’. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 2256–2265
- Guo, Q., Gu, X. D.: ‘Towards photorealistic face generation using text-guided Semantic-Spatial FaceGAN’, Multimedia Tools and Applications, 2024
- Wang, Y. X., Zhou, W. G., Bao, J. M., et al.: ‘CLIP2GAN: Toward Bridging Text With the Latent Space of GANs’, IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34, pp. 6847–6859
- Hou, Y. L., Zhang, W., Zhu, Z. L., et al.: ‘CLIP-GAN: Stacking CLIPs and GAN for Efficient and Controllable Text-to-Image Synthesis’, 2025, pp. 1–15

