In contrast to the box-office success of 3D movies, 3D broadcasting, which has been labeled the next generation of broadcasting, has not yet found its place in the market. According to data from Retrevo, a market analysis agency, 55% of consumers who plan to buy an HDTV feel no need for a 3D function because of the cumbersome task of wearing 3D glasses and the lack of content. In addition, the current 3D display method (stereo 3D: S3D) uses only one viewpoint to synthesize the 3D image, so the realism and vividness of the object diminish when it is seen from another viewpoint. The alternative to the stereo 3D image is the multi-view display technique, which does not require 3D glasses. It offers more realistic viewing because it provides more viewpoints than the stereo display, so the viewer can enjoy the 3D image from any perspective.
There are many ways to obtain a multi-view image, but the simplest is to install as many cameras as there are required viewpoints and capture an image from each view. However, this method lacks practicality because of the difficulty of calibrating the cameras and their high cost.
Therefore, other methods of generating multi-view images have been researched. The first alternative uses a stereo matching algorithm to obtain a depth image from a pair of stereo images. The other alternative uses a depth camera, which captures the color and depth images simultaneously. Because stereo matching is relatively robust to the environment, it is easy to obtain a disparity map; however, it takes a long execution time, and the accuracy of the depth image is limited. A depth camera provides depth images with high accuracy but suffers from low resolution and high equipment cost.
In this paper, we propose a method of disparity refinement near object boundaries for virtual-view quality enhancement. The disparity map is obtained using stereo matching; however, this results in noise near the object boundaries because of depth discontinuities. To improve the quality of the virtual-view image, the disparity map must be refined. First, the error region is detected by investigating the consistency between the left and right disparity maps. A region visible in the left image may not exist in the right image; in such an occlusion region, errors are likely because the disparity information is difficult to obtain.
This phenomenon creates a larger range of error as the resolution of the image increases, because the matching computation becomes more complicated. To extract the occlusion region, the optical flow algorithm is applied to the texture component of the image, which carries its structural characteristics. Because light, noise, and shadow are almost entirely removed from the texture component, motion information can be extracted with good performance.
The motion information of every pixel is extracted by the Lucas-Kanade method, a dense optical flow algorithm. Using the extracted motion information, the consistency of the left and right images is investigated; pixels that are not consistent are considered to have no correspondence and are defined as the occlusion region. The error and occlusion regions extracted from the left and right disparity maps are fused into a new region. The newly labeled regions are filled with appropriate disparity values using a joint bilateral filter, which preserves the object boundaries of the reference image. Finally, the refined disparity map is used to synthesize the virtual-view image by bidirectional linear interpolation.
This paper is organized as follows. Section 2 explains the detection of error regions in the disparity map. Section 3 describes the extraction of the occlusion region using optical flow. The performance of the proposed algorithm is evaluated through experiments in Section 4. Finally, Section 5 contains the conclusion.
2. Error Region Detection in the Disparity Map
Stereo matching algorithms work on the premise that any pixel chosen in the left image also exists in the right image. However, depending on the viewpoints of the two cameras, the lighting and the amount of reflected light change, so the same point in the left and right images can have different pixel values. Additionally, if a region contains identical pixel values, finding the pixel that corresponds to a point in that region becomes difficult. Therefore, the possibility of extracting incorrect information is high. The same error occurs in the occlusion region, which exists in one image but not in the other. Thus, to improve the quality of the virtual-view image, the disparity information must be accurate. This section explains how to detect the error region by investigating the consistency of the disparity maps extracted using stereo matching.
Fig. 1(c) and Fig. 1(d) show the disparity maps obtained using stereo matching. There is a possibility of detecting wrong disparity information because the fluctuation of the disparity values near object boundaries is large. As the resolution of the image increases, the matching computation becomes more complicated and the range of error widens. The wrong disparity values can be detected by checking the consistency of the left and right disparity maps as shown in Eq. (1) and Eq. (2).
Fig. 1. The disparity map extracted using stereo matching: (a) left color image; (b) right color image; (c) left disparity map; (d) right disparity map
where xl and xr are the coordinates in the left and right images, and dl and dr represent the left and right disparity maps, respectively. If c(x) = 0, the disparity value at the corresponding coordinate is consistent; if c(x) ≠ 0, the coordinate has a wrong disparity value.
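The left-right consistency check described above can be sketched as follows (a minimal illustration in the spirit of Eq. (1) and Eq. (2); the function name and the zero-threshold test are our own assumptions, not the paper's code):

```python
import numpy as np

def consistency_error(d_left, d_right, thresh=0):
    """Left-right consistency check for a pair of disparity maps.

    For each pixel x in the left map, look up the right map at the
    matched column x - d_l(x) and compare the two disparity values:
    c(x) = d_l(x) - d_r(x - d_l(x)).  c(x) == 0 means the disparity is
    consistent; anything else is flagged as an error pixel.
    """
    h, w = d_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)      # column indices
    ys = np.arange(h)[:, None].repeat(w, axis=1)      # row indices
    xr = np.clip(xs - d_left.astype(int), 0, w - 1)   # matched column in the right map
    c = d_left - d_right[ys, xr]                      # consistency residual
    return np.abs(c) > thresh                         # True where the disparity is wrong
```

Applied to the left and right maps in turn, this yields the two error masks illustrated in Fig. 2.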
Fig. 2 shows the error regions extracted from the disparity maps. It can be seen that the errors are mostly detected near object boundaries.
Fig. 2. Error region extraction: (a) the left disparity map; (b) the right disparity map
3. Occlusion Extraction using Optical Flow
Optical flow is a method of tracing motion between two frames. There are two types: sparse optical flow, which traces only regions with noticeable features such as object boundaries, and dense optical flow, which obtains motion information for every pixel in the image.
In this paper, the Lucas-Kanade method is used as a dense optical flow to trace motion. After the consistency of the motion information extracted from the left and right images is investigated, pixels that do not match are regarded as the occlusion region, which exists in the current image but not in the other. The optical flow is also applied to the texture component of the image to improve its quality. Fig. 3 shows the flow chart of the proposed occlusion extraction.
Fig. 3. Block diagram of the occlusion extraction
3.1 Extraction of texture component
Generally, an image can be separated into structure and texture components. The structure component represents the object's appearance, color, and so on; therefore, it also contains pixels and shadows that violate brightness constancy. The texture component, by contrast, represents characteristics such as smoothness, roughness, and regularity. Thus, the performance of the optical flow can be improved by applying it to the texture-component image.
The structure component is separated from the intensity image using the method of Rudin, Osher, and Fatemi, which removes noise by exploiting total variation. For the intensity image I(x), the structure-texture separation is given by Eq. (3) and Eq. (4).
where I(x) is the intensity image, Is(x) is the structure component, and θ is a constant. ∇Is represents the gradient of the structure component. The minimizer of Eq. (3) is the solution for the structure component Is(x). The texture component IT(x) is calculated as the difference between the intensity image and its structure component, as expressed in Eq. (4). Fig. 4 shows the image after it has been separated into structure and texture components. In the texture image, it can be seen that the lighting and shadow components have been almost removed.
Fig. 4. Extraction of the texture component
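The ROF separation of Eq. (3) and Eq. (4) can be sketched with Chambolle's projection algorithm, a standard solver for the ROF model (the choice of solver, step size, and iteration count here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def _grad(u):
    # Forward differences; last row/column use a zero (Neumann) boundary.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def _div(px, py):
    # Discrete divergence, the adjoint of -_grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def structure_texture_split(f, theta=0.125, n_iter=100, tau=0.125):
    """ROF structure-texture separation (a stand-in solver for Eq. (3)).

    Chambolle's dual projection iteration minimizes
    |grad(I_s)| + (1/(2*theta)) * ||I_s - I||^2; the texture component
    is then the residual I_T = I - I_s, as in Eq. (4).  `f` is a float
    intensity image.
    """
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = _grad(_div(px, py) - f / theta)
        norm = 1.0 + tau * np.hypot(gx, gy)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    structure = f - theta * _div(px, py)
    return structure, f - structure
```

By construction the two components sum back to the input image, and the structure component has a strictly smaller total variation on noisy inputs.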
3.2 Occlusion detection
We use the Lucas-Kanade optical flow method, which estimates motion information between two images, to determine the occlusion region. After the consistency of the motion information between the left and right images is investigated, inconsistent pixels are defined as the occlusion region.
Optical flow assumes brightness constancy, but in actual images, the brightness values of the left and right images differ because of camera sensor noise, the object's reflectance, and shadows. For these reasons, the performance of the optical flow degrades. To determine more accurate motion information, we apply the optical flow to the texture component of the image; as a result, its performance is improved.
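A minimal sketch of this step, assuming a windowed per-pixel Lucas-Kanade solver and a simple forward-backward (left-right) consistency test for occlusion (the window size, threshold, and function names are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lucas_kanade_dense(img1, img2, win=5):
    """Minimal dense Lucas-Kanade: solve the 2x2 normal equations of
    Ix*u + Iy*v + It = 0 over a win x win window at every pixel."""
    Ix = np.gradient(img1, axis=1)
    Iy = np.gradient(img1, axis=0)
    It = img2 - img1
    # Windowed sums of the structure-tensor and temporal terms.
    Sxx = uniform_filter(Ix * Ix, win); Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    Sxt = uniform_filter(Ix * It, win); Syt = uniform_filter(Iy * It, win)
    det = Sxx * Syy - Sxy ** 2
    det = np.where(np.abs(det) < 1e-9, 1e-9, det)   # guard ill-conditioned pixels
    u = (Sxy * Syt - Syy * Sxt) / det               # horizontal flow
    v = (Sxy * Sxt - Sxx * Syt) / det               # vertical flow
    return u, v

def occlusion_mask(u_lr, u_rl, thresh=0.5):
    """Forward-backward consistency: a pixel whose left-to-right flow is
    not cancelled by the right-to-left flow at the matched position is
    marked as occluded."""
    h, w = u_lr.shape
    xs = np.arange(w)[None, :]
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xr = np.clip(np.rint(xs + u_lr).astype(int), 0, w - 1)
    return np.abs(u_lr + u_rl[ys, xr]) > thresh
```

In the proposed pipeline the two images passed to `lucas_kanade_dense` would be the texture components of the left and right views rather than the raw intensities.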
Fig. 5 compares the extraction results of the occlusion region with and without the texture component. The occlusion region is usually located on one side of an object. In Fig. 5(a) and 5(b), however, the occlusion region detected without the texture separation process appears indiscriminately throughout the image. Fig. 5(c) and 5(d) show the result based on texture separation, which is superior to the result without it.
Fig. 5. Results depending on the use of the texture image: (a) left image, texture not used; (b) right image, texture not used; (c) left image, texture used; (d) right image, texture used
The union of the occlusion region found using the above method and the detected error region of the disparity map is defined as the new error region.
3.3 Disparity map refinement
In this paper, the error regions are rectified using a joint bilateral filter, which fills the holes while preserving the boundaries of the reference image. The joint bilateral filter is defined in Eq. (5) and Eq. (6).
where D is the depth image, I is the intensity image, and D'p represents the pixel value generated by applying the joint bilateral filter to D and I. G is the Gaussian function and ||p - q|| is the Euclidean distance between p and q. S is the set of neighboring pixels of p, σs and σr are parameters defining the size of the neighborhood, and Wp is the normalization constant. Fig. 6(a) is the disparity map obtained using stereo matching. Because of disparity errors in the occlusion and boundary regions, the map shows blurring and unclear shapes. Fig. 6(b) is the disparity map rectified using the proposed method; the noise and the error region near the object boundaries have been corrected.
Fig. 6. Disparity map refinement: (a) before processing and (b) after processing
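A direct, unoptimized sketch of this refinement step in the form of Eq. (5) and Eq. (6), restricted to the labeled error pixels (the parameter values and the masking of unreliable neighbours are illustrative assumptions):

```python
import numpy as np

def joint_bilateral_fill(depth, intensity, mask, radius=5, sigma_s=3.0, sigma_r=10.0):
    """Fill masked (error/occlusion) pixels of a disparity map with a
    joint bilateral filter guided by the reference intensity image:
    a spatial Gaussian on ||p - q|| and a range Gaussian on the guide
    intensity difference, normalized by the total weight Wp.
    `intensity` should be a float image to avoid integer overflow.
    """
    h, w = depth.shape
    out = depth.astype(float).copy()
    for py, px in zip(*np.nonzero(mask)):
        y0, y1 = max(py - radius, 0), min(py + radius + 1, h)
        x0, x1 = max(px - radius, 0), min(px + radius + 1, w)
        qy, qx = np.mgrid[y0:y1, x0:x1]
        valid = ~mask[y0:y1, x0:x1]            # only trust non-error neighbours
        spatial = np.exp(-((qy - py) ** 2 + (qx - px) ** 2) / (2 * sigma_s ** 2))
        rng_w = np.exp(-((intensity[y0:y1, x0:x1] - intensity[py, px]) ** 2)
                       / (2 * sigma_r ** 2))
        wgt = spatial * rng_w * valid
        wp = wgt.sum()                          # normalization constant Wp
        if wp > 0:
            out[py, px] = (wgt * depth[y0:y1, x0:x1]).sum() / wp
    return out
```

Because the range weight follows the guide image, a hole next to an intensity edge is filled almost exclusively from its own side of the edge, which is what preserves the object boundary.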
4. Experimental Results
To evaluate the performance of the proposed algorithm, we used the "Samgye" and "Gyebeck" (MBC drama) sequences with a size of 1920x1080 as test sequences. To detect the occlusion region, we used the Lucas-Kanade method, a dense optical flow algorithm, with a window size of 5x5. The virtual-view images are synthesized simply by applying bidirectional linear interpolation. The θ value for separating the texture component was set to 0.125 based on experimental results.
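Bidirectional linear interpolation can be sketched as a forward warp of both views toward the virtual position followed by a weighted blend (a hypothetical simplification with nearest-pixel rounding; the paper's exact hole handling is not specified, and the function name is our own):

```python
import numpy as np

def synthesize_view(left, right, d_left, d_right, alpha):
    """Synthesize a virtual view at fraction `alpha` of the baseline
    (0 = left camera, 1 = right camera) by forward-warping each view by
    its scaled disparity and blending with weights (1 - alpha) / alpha."""
    h, w = left.shape
    virt = np.zeros((h, w)); wsum = np.zeros((h, w))
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Forward-warp the left view: a left pixel at x lands at x - alpha*d.
    xl = np.clip(np.rint(xs - alpha * d_left).astype(int), 0, w - 1)
    np.add.at(virt, (ys, xl), (1 - alpha) * left)
    np.add.at(wsum, (ys, xl), 1 - alpha)
    # Forward-warp the right view: a right pixel at x lands at x + (1-alpha)*d.
    xr = np.clip(np.rint(xs + (1 - alpha) * d_right).astype(int), 0, w - 1)
    np.add.at(virt, (ys, xr), alpha * right)
    np.add.at(wsum, (ys, xr), alpha)
    return np.where(wsum > 0, virt / np.maximum(wsum, 1e-9), 0)
```

Sweeping `alpha` over evenly spaced values in [0, 1] produces a set of intermediate viewpoints such as the eleven virtual views used in the experiments.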
Fig. 7 shows the 1st, 3rd, 5th, 7th, 9th, and 11th view images out of the eleven virtual views synthesized using bidirectional linear interpolation with "Samgye" as the test sequence.
Fig. 7. Virtual view-point images synthesized using bidirectional linear interpolation: (a)-(f) results of the 1st, 3rd, 5th, 7th, 9th, and 11th view-points
Fig. 8 shows the virtual views generated by four different algorithms. We compared the performance of the proposed algorithm with the unprocessed disparity map, the error region + JBF algorithm, and the occlusion region + JBF algorithm.
Fig. 8. Performance comparison (image quality): (a) before processing; (b) error region + JBF; (c) occlusion region + JBF; (d) the proposed method
As shown in Fig. 8(a), 8(b), and 8(c), the quality of the virtual-view images synthesized using the other algorithms is poor, especially near the cockscomb area. Fig. 8(d) shows that the cockscomb looks more natural after the proposed algorithm is applied. Comparing Fig. 9(d) with the others also shows that the quality of the virtual-view image improves when the proposed algorithm is used.
Fig. 9. Performance comparison (image quality): (a) before processing; (b) error region + JBF; (c) occlusion region + JBF; (d) the proposed method
Fig. 10 shows the 1st, 3rd, 5th, 7th, 9th, and 11th viewpoint images out of the eleven virtual views synthesized using bidirectional linear interpolation with the MBC drama "Gyebeck" as the test sequence.
Fig. 10. Virtual view-point images synthesized using bidirectional linear interpolation: (a)-(f) results of the 1st, 3rd, 5th, 7th, 9th, and 11th view-points
Fig. 11 compares the results for specific regions to test the performance of the proposed algorithm. When the existing disparity map is used without refinement, the area near the people's ears in Fig. 11(a) shows distortion caused by errors in the disparity map. When the proposed algorithm is used, Fig. 11(d) and 12(d) show that the quality of the virtual-view image improves.
Fig. 11. Performance comparison (image quality): (a) before processing; (b) error region + JBF; (c) occlusion region + JBF; (d) the proposed method
Fig. 12. Performance comparison (image quality): (a) before processing; (b) error region + JBF; (c) occlusion region + JBF; (d) the proposed method
Table 1 shows the average PSNR of each method on the Middlebury sequences (Tsukuba, Venus, Teddy, and Cones). The PSNR of the proposed method is greater than that of the other methods, showing that the disparity map is greatly refined by the proposed algorithm.
Table 1. PSNR of each method on the Middlebury sequences
5. Conclusion
In this paper, we proposed a virtual-view synthesis algorithm using disparity refinement to improve the quality of the synthesized image. The disparity map obtained using stereo matching contains noise and error regions. These regions usually appear near object boundaries and degrade the quality of the generated virtual-view image.
In the proposed algorithm, the error region is detected by investigating the consistency between the left and right disparity maps. In addition, the texture component, which represents the structural characteristics of the image, is separated from the image. The optical flow algorithm is then applied to the obtained texture component to extract motion information with high accuracy. After the consistency of the motion information between the left and right images is investigated, inconsistent pixels are defined as the occlusion region. The error region is combined with the occlusion region to define a new region, and the joint bilateral filter is applied to this new region to acquire appropriate disparity values. Finally, the virtual-view image is generated by applying bidirectional linear interpolation to the refined disparity map. Experimental results show that the quality of the virtual-view images is enhanced by the proposed algorithm.