Open Access
ITM Web Conf.
Volume 54, 2023
2nd International Conference on Advances in Computing, Communication and Security (I3CS-2023)
Article Number 01013
Number of page(s) 12
Section Computing
Published online 04 July 2023
  1. G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, "Image Inpainting for Irregular Holes Using Partial Convolutions," Proc. Eur. Conf. Comput. Vis. (ECCV), LNCS vol. 11206, Springer, 2018.
  2. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2223–2232, 2017.
  3. T. Park, M.-Y. Liu, T.-C. Wang, and J.-Y. Zhu, "GauGAN: Semantic Image Synthesis with Spatially Adaptive Normalization," ACM SIGGRAPH 2019 Real-Time Live!, July 2019, doi: 10.1145/3306305.3332370.
  4. K. Nicolaus, The Restoration of Paintings, C. Westphal, Ed. Könemann, 1999.
  5. A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," Proc. 28th Annu. Conf. Comput. Graph. Interact. Tech. (SIGGRAPH), pp. 341–346, 2001, doi: 10.1145/383259.383296.
  6. A. Levin, A. Zomet, and Y. Weiss, "Learning how to inpaint from global image statistics," Proc. IEEE Int. Conf. Comput. Vis. (ICCV), vol. 1, pp. 305–312, 2003, doi: 10.1109/iccv.2003.1238360.
  7. C. Ballester, V. Caselles, and J. Verdera, "Disocclusion by joint interpolation of vector fields and gray levels," Multiscale Model. Simul., vol. 2, no. 1, pp. 80–123, 2004, doi: 10.1137/S1540345903422458.
  8. A. Telea, "An Image Inpainting Technique Based on the Fast Marching Method," J. Graph. Tools, vol. 9, no. 1, pp. 23–34, 2004, doi: 10.1080/10867651.2004.10487596.
  9. M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," Proc. 27th Annu. Conf. Comput. Graph. Interact. Tech. (SIGGRAPH), pp. 417–424, 2000, doi: 10.1145/344779.344972.
  10. C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman, "PatchMatch: A randomized correspondence algorithm for structural image editing," ACM Trans. Graph., vol. 28, no. 3, pp. 1–12, 2009, doi: 10.1145/1531326.1531330.
  11. C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, and J. Verdera, "Filling-in by joint interpolation of vector fields and gray levels," IEEE Trans. Image Process., vol. 10, no. 8, pp. 1200–1211, 2001, doi: 10.1109/83.935036.
  12. J. Xie, L. Xu, and E. Chen, "Image denoising and inpainting with deep neural networks," Adv. Neural Inf. Process. Syst. (NIPS), pp. 341–349, 2012.
  13. G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro, "Image Inpainting for Irregular Holes Using Partial Convolutions," Proc. Eur. Conf. Comput. Vis. (ECCV), LNCS vol. 11215, Springer, 2018.
  14. L. Xu, J. S. J. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," Adv. Neural Inf. Process. Syst. (NIPS), pp. 1790–1798, 2014.
  15. S. W. Zamir et al., "Multi-Stage Progressive Image Restoration," Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2021.
  16. L. Gatys, A. Ecker, and M. Bethge, "A Neural Algorithm of Artistic Style," J. Vis., vol. 16, no. 12, p. 326, 2016, doi: 10.1167/16.12.326.
  17. J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," Eur. Conf. Comput. Vis. (ECCV), LNCS vol. 9906, pp. 694–711, 2016, doi: 10.1007/978-3-319-46475-6_43.
  18. Y. Zeng, J. C. A. van der Lubbe, and M. Loog, "Multi-scale convolutional neural network for pixel-wise reconstruction of Van Gogh's drawings," Mach. Vis. Appl., vol. 30, no. 7–8, pp. 1229–1241, 2019, doi: 10.1007/s00138-019-01047-3.
  19. V. Gupta, N. Sambyal, A. Sharma, and P. Kumar, "Restoration of artwork using deep neural networks," Evol. Syst., vol. 12, no. 2, pp. 439–446, 2021, doi: 10.1007/s12530-019-09303-7.
  20. K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," Proc. IEEE Int. Conf. Comput. Vis. (ICCV), pp. 2980–2988, 2017, doi: 10.1109/ICCV.2017.322.
  21. Z. Zou, P. Zhao, and X. Zhao, "Virtual restoration of the colored paintings on weathered beams in the Forbidden City using multiple deep learning algorithms," Adv. Eng. Informatics, vol. 50, p. 101421, 2021, doi: 10.1016/j.aei.2021.101421.
  22. J. Cao, Z. Zhang, A. Zhao, H. Cui, and Q. Zhang, "Ancient mural restoration based on a modified generative adversarial network," Herit. Sci., vol. 8, no. 1, pp. 1–14, 2020, doi: 10.1186/s40494-020-0355-x.
  23. J. Li, H. Wang, Z. Deng, M. Pan, and H. Chen, "Restoration of non-structural damaged murals in Shenzhen Bao'an based on a generator-discriminator network," Herit. Sci., vol. 9, no. 1, pp. 1–14, 2021, doi: 10.1186/s40494-020-00478-w.
  24. P. Kumar and V. Gupta, "Restoration of damaged artworks based on a generative adversarial network," Multimed. Tools Appl., 2023, doi: 10.1007/s11042-023-15222-2.
  25. Z. Zou, P. Zhao, and X. Zhao, "Automatic segmentation, inpainting, and classification of defective patterns on ancient architecture using multiple deep learning algorithms," Struct. Control Health Monit., pp. 1–18, 2021, doi: 10.1002/stc.2742.
  26. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 770–778, 2016, doi: 10.1109/CVPR.2016.90.
  27. G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 2261–2269, 2017, doi: 10.1109/CVPR.2017.243.
  28. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," Med. Image Comput. Comput.-Assist. Interv. (MICCAI), LNCS vol. 9351, pp. 234–241, 2015, doi: 10.1007/978-3-319-24574-4_28.
