Virtual View Generation by a New Hole Filling Algorithm

  • Ko, Min Soo;
  • Yoo, Jisang
  • Received : 2013.09.23
  • Accepted : 2013.12.16
  • Published : 2014.05.01


In this paper, a hole-filling algorithm with improved performance, which includes a boundary-noise-removal pre-processing step and can be used for arbitrary virtual view synthesis, is proposed. Boundary noise arises from the mismatch between the boundaries of the depth and texture images during the 3D warping process, and it usually causes noticeable defects in the generated virtual view. The common-hole region cannot be recovered using only the given original view as a reference, and most conventional algorithms generate unnatural views that contain distorted parts of the texture. To remove the boundary noise, we first find the occlusion regions and expand them into the common-hole region of the synthesized view. We then fill the common-hole using a spiral weighted average algorithm together with a gradient searching algorithm: the spiral weighted average algorithm preserves each object's boundary well by using depth information, while the gradient searching algorithm preserves fine detail. By combining the two, we retain the strengths of both. We also reduce the flickering defect that appears around the filled common-hole region by applying a probability mask. Experimental results show that the proposed algorithm performs much better than conventional algorithms.
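To illustrate the spiral weighted average idea described in the abstract, the following is a minimal sketch (hypothetical function and parameter names; not the authors' exact formulation). It fills each common-hole pixel by searching outward over growing square rings, keeps only neighbours whose depth is close to the local background (farthest) depth so that background rather than foreground texture is propagated, and averages them with inverse-squared-distance weights. It assumes a single-channel image and a depth map in which larger values mean farther:

```python
import numpy as np

def spiral_weighted_fill(img, depth, hole_mask, depth_tol=5.0, max_radius=10):
    """Illustrative sketch of depth-guided spiral weighted averaging.

    img       : 2-D array, single-channel texture of the synthesized view
    depth     : 2-D array, depth map (larger value = farther; an assumption)
    hole_mask : 2-D bool array, True where the common-hole is
    """
    out = img.astype(np.float64).copy()
    h, w = hole_mask.shape
    for y, x in zip(*np.nonzero(hole_mask)):
        for r in range(1, max_radius + 1):           # growing "spiral" rings
            cand = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if max(abs(dy), abs(dx)) != r:   # ring border only
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not hole_mask[ny, nx]:
                        cand.append((ny, nx, dy * dy + dx * dx))
            if cand:
                # farthest candidate is taken as local background depth
                d_bg = max(depth[ny, nx] for ny, nx, _ in cand)
                acc = wsum = 0.0
                for ny, nx, d2 in cand:
                    if depth[ny, nx] >= d_bg - depth_tol:   # background only
                        wgt = 1.0 / d2                      # inverse-squared distance
                        acc += wgt * img[ny, nx]
                        wsum += wgt
                if wsum > 0:
                    out[y, x] = acc / wsum
                    break            # first ring with usable support wins
    return out.astype(img.dtype)
```

The depth test is what distinguishes this from a plain blur: near the boundary of a foreground object, only far (background) neighbours contribute, which is why such schemes keep object boundaries sharp instead of smearing foreground colour into the disocclusion.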


3D Video; Boundary noise; Common-hole filling; Flickering defect; Virtual view



Cited by

  1. "Filling Disocclusions in Extrapolated Virtual Views Using Hybrid Texture Synthesis," vol. 62, no. 2, 2016.