Quality Assessment of Images Projected Using Multiple Projectors

  • Kakli, Muhammad Umer (SEECS, National University of Sciences and Technology (NUST)) ;
  • Qureshi, Hassaan Saadat (SEECS, National University of Sciences and Technology (NUST)) ;
  • Khan, Muhammad Murtaza (SEECS, National University of Sciences and Technology (NUST)) ;
  • Hafiz, Rehan (SEECS, National University of Sciences and Technology (NUST)) ;
  • Cho, Yongju (Electronics and Telecommunications Research Institute (ETRI), Daejeon) ;
  • Park, Unsang (Dept. of Computer Science and Engineering, Sogang University)
  • Received: 2014.12.18
  • Accepted: 2015.05.16
  • Published: 2015.06.30

Abstract

Multiple projectors with partially overlapping regions can be used to project a seamless image on a large projection surface. With the advent of high-resolution photography, such systems are gaining popularity. Experts set up such projection systems by subjectively identifying the types of errors induced by the system in the projected images and rectifying them by optimizing (correcting) the parameters associated with the system. This requires substantial time and effort, thus making it difficult to set up such systems. Moreover, comparing the performance of different multi-projector display (MPD) systems becomes difficult because of the subjective nature of evaluation. In this work, we present a framework to quantitatively determine the quality of an MPD system and any image projected using such a system. We have divided the quality assessment into geometric and photometric qualities. For geometric quality assessment, we use Feature Similarity Index (FSIM) and distance-based Scale Invariant Feature Transform (SIFT). For photometric quality assessment, we propose to use a measure incorporating Spectral Angle Mapper (SAM), Intensity Magnitude Ratio (IMR) and Perceptual Color Difference (ΔE). We have tested the proposed framework and demonstrated that it provides an acceptable method for both quantitative evaluation of MPD systems and estimation of the perceptual quality of any image projected by them.


1. Introduction

Multi-projector display (MPD) systems can be used to generate high-resolution displays or to project wide field of view (FOV) images, thus finding application in live event projections such as sports, concerts or ceremonies. If multiple overlapping narrow-FOV images of the same scenic location are provided as inputs to an MPD system, it is desired that the resultant image be a seamless, large-FOV panoramic image. This is different from tiled displays, in which the images are projected side by side [1], resulting in a combined image with a visible seam line. To maintain seamlessness in MPD systems, a small region of each projector is made to project overlapping (common) information. Using geometric correction, blending and color matching, the contents in the overlapping region are aligned and blended so that the image appears continuous (seamless) [2].

Geometric correction of the content projected by each projector is important because the images may be projected on a planar or non-planar surface. Geometric correction typically modifies the content so that it appears ortho-rectified when projected [2,3]. It also ensures that the image content is adjusted in such a way that structures in the two projected overlapping regions are aligned. This helps the blending functions, which assume that the images are pre-aligned. Geometric correction methods based on feature matching, such as the Scale Invariant Feature Transform (SIFT) [4], Speeded Up Robust Features (SURF) [5] or the non-parametric approaches discussed in [3], may produce results with slight errors. The results of geometric correction or alignment of multiple images may also vary depending upon the type of image used for calibration and the values of the system parameters selected during the calibration process. Two projected images from the same MPD system are shown in Fig. 1. The projection results differ because they were obtained using different geometric correction parameters, i.e., different orders of Bézier curves [7]. Qualitatively, the image in Fig. 1(a) has less geometric error than that in Fig. 1(b).

Fig. 1. Images projected using a two-projector display system. In (a), the transition region from the left to the right projector is seamless. In (b), the transition region is visible (marked in red) because of incorrect geometric alignment.

Photometric corrections are applied to the projected images to make their intensities and colors similar. A sharp change in color or intensity from one projector to another degrades visual seamlessness. As in the case of geometric correction, different color correction and blending schemes [2,3] may produce different results. Thus, it is important to identify how accurately both colors and intensities are blended in the overlapping regions.

Multi-projector systems generally consist of a calibration phase and a runtime phase. In the calibration phase, the values of the geometric and photometric correction parameters necessary for projecting a seamless image are determined. In the runtime phase, those parameters are used to perform geometric and photometric correction of the projected content [6]. In recent years, most research has focused on improving geometric and photometric correction schemes [1,2,3,5,7]. In the literature, the quality of a proposed scheme is evaluated either subjectively or using a single quantitative measure. For example, in [3], the authors performed only a subjective assessment of geometric and photometric quality. The problem with subjective quality evaluation is that, even when performed by experts, it varies from person to person. It is also difficult to use subjective measures for comparative analysis of two or more systems because they are neither quantitative nor evaluator-independent. In [1], the authors used the reduction in dynamic range (standard deviation of intensity) and chrominance (percentage of error in solid color) as the measure of photometric quality. In [8], the authors proposed to calculate the ΔE-based difference in photometric quality between the projected image and the original image (i.e., the image to be projected). However, this technique assumes that the images are perfectly aligned and hence does not consider any degradation in photometric quality due to geometric misalignment. The shortcomings of existing quality assessment measures are highlighted in Fig. 3 and Table 1, suggesting that a single measure is not sufficient for assessing geometric or photometric quality. Using multiple measures raises the issue of quality indices with different units and scales; thus, different geometric or photometric quality assessment measures cannot simply be combined. To the best of our knowledge, no quality assessment measure exists that can determine geometric and photometric quality independently and then combine the two into a single joint quality assessment index.

Table 1. Quantitative geometric quality measures (special cases)

The main contributions of this paper are twofold. First, we propose a framework for separately assessing the geometric and photometric quality of images projected using an MPD system. The proposed framework uses a Structured Light Pattern (SLP) to determine the quality of an MPD system; therefore, it is possible to directly compare the performance of two different MPD systems. For quantitative quality evaluation, we propose to use the Feature Similarity Index (FSIM) [9], the Scale Invariant Feature Transform (SIFT) [4], the Spectral Angle Mapper (SAM) [10], the Intensity Magnitude Ratio (IMR) [11] and the ΔE-based Perceptual Color Difference [12]. Second, the same framework may also be used with generic projected content to assess perceptual quality. Perceptual quality changes with the content of the image being projected: if two images are projected using the same MPD system, one having structural information in the overlapping region and the other a smooth uniform surface, the structural misalignment in projector calibration will be more prominent in the image whose overlapping region contains structure.

The rest of the paper is organized as follows: Section 2 presents an introduction to quality assessment and the proposed framework for quality assessment of projected images. Section 3 presents the experimental results. Section 4 concludes the paper.

 

2. Proposed Framework for Quality Assessment

Quality assessment of images is an important tool in image processing. Quality assessment can be qualitative or quantitative. Qualitative assessment may lead to different results depending upon the expertise of the evaluator; therefore, it is desirable to eliminate the subjective nature of evaluation by using a quantitative evaluation that is evaluator-independent. As defined by Moorthy et al. in [13], quantitative quality assessment methods are generally divided into three categories: 1) full-reference, 2) reduced-reference and 3) no-reference quality assessment. The availability of a reference image plays a major role in categorizing an image quality assessment framework into one of these three categories. Solh et al. [14], Zou et al. [15] and Qureshi et al. [11] proposed different frameworks for measuring the quality of stitched images or mosaics. These methods determine the quality of stitched images, where both the inputs and outputs are conventional images (stored on a computer). Assessing the quality of an image projected using multiple projectors differs from conventional image quality assessment because the input and output are in different domains (the saved image domain vs. the projected image domain). Thus, a new framework has to be developed for assessing the quality of images projected by an MPD system.

As described in the previous sections, our interest is to determine the quality of an MPD system and of any image projected by such a system. It is assumed that in both cases the goal of projection is to obtain a single seamless image. For the discussion of the proposed method and results, we assume that the image is projected using only two projectors. The proposed framework consists of capturing the following images: 1) the geometrically and photometrically corrected image (intensity and color corrected, but with no blending mask applied) projected from the left (or right) projector while keeping the other projector off; this captured image will be called the left projector image (lpi) (or right projector image (rpi)). 2) The geometrically and photometrically corrected image generated using both projectors (intensity and color corrected, with blending applied to make the overlapping region seamless); this image will be referred to as the complete projector image (cpi).

It is assumed that the images captured using a camera contain both the region of projection and its surrounding dark region. The surrounding dark region is removed by projecting and capturing a white image, as the area of the white image represents the target area of projection. Next, the high- and low-frequency information of the cropped images is separated using the à trous wavelet transform [16]. The low-frequency component and the captured image (containing both high and low frequencies) are used to calculate geometric and photometric quality, as shown in Fig. 2.
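For illustration, a minimal sketch of this separation is given below, assuming the B3-spline variant of the à trous algorithm and three decomposition levels (neither setting is specified in the paper):

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline scaling kernel

def atrous_low_high(img, levels=3):
    """Split an image into a low-frequency approximation and the HF residual."""
    low = img.astype(float)
    for j in range(levels):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps ("holes")
        kernel = np.zeros((len(B3) - 1) * 2**j + 1)
        kernel[::2**j] = B3
        for axis in (0, 1):  # separable smoothing along rows and columns
            low = convolve1d(low, kernel, axis=axis, mode="reflect")
    high = img.astype(float) - low  # detail planes summed into one residual
    return low, high
```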

Fig. 2. Block diagram of the proposed framework. Solid lines indicate that the FSIM, NDb_SIFT, NSAM, IMR and ΔE measures are calculated between the overlapping regions of the left and right projector images. Dotted lines indicate that the measures are calculated between the overlapping regions of the left (or right) and complete images.

2.1. Geometric Quality

A cpi appears seamless if the structure in the overlapping regions of lpi and rpi is geometrically aligned (i.e., overlaps exactly). Structure can be represented by high-frequency information (i.e., edges in the images), as low-frequency information generally represents homogeneous regions. Hence, to find the similarity in structure of two images, a quantitative measure should compare their high-frequency information. The Feature Similarity Index (FSIM), recently proposed by Zhang et al. in [9], can be used for this purpose. Similarly, the HPF_SSIM index [11] can also be used to determine structural similarity between two images. Although not presented in this paper due to space constraints, our experiments demonstrated that the quality assessment results are similar when using either FSIM or HPF_SSIM for assessing geometric quality.

Assuming that the captured left projector image is represented as L, the captured right projector image as R and the captured complete image as C, their overlapping regions are represented as Lol, Rol and Col, respectively. The captured images are used for assessing geometric quality, whereas the low-frequency information, obtained using the à trous wavelet filter [16], is used for estimating photometric quality.

FSIM is a structural similarity measure based on the Human Visual System (HVS) [9]. It uses Phase Congruency (PC) and Gradient Magnitude (GM) maps (high-frequency information of images) to determine the similarity between two images, because the HVS is particularly sensitive to these features. As defined in [9], the similarity of the PC and GM maps of two images a and b can be computed as:
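S_PC(x,y) = (2·PC_a(x,y)·PC_b(x,y) + T1) / (PC_a(x,y)² + PC_b(x,y)² + T1)

S_GM(x,y) = (2·GM_a(x,y)·GM_b(x,y) + T2) / (GM_a(x,y)² + GM_b(x,y)² + T2)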

where PC_a, PC_b, GM_a and GM_b represent the PC and GM values of images a and b at pixel (x,y), and T1 and T2 are small constants to avoid instability. Using S_PC and S_GM, FSIM is calculated as follows:
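FSIM = Σ_(x,y) S_PC(x,y)·S_GM(x,y)·PC_m(x,y) / Σ_(x,y) PC_m(x,y),  where PC_m(x,y) = max(PC_a(x,y), PC_b(x,y))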

The ideal value of FSIM is 1 and the worst value is 0. For better readability, it can be represented on a scale of 0 to 100. In the proposed framework, three values of FSIM are calculated: 1) FSIM(Lol,Rol) between the overlapping regions of lpi and rpi, 2) FSIM(Lol,Col) between the overlapping regions of lpi and cpi and 3) FSIM(Rol,Col) between the overlapping regions of rpi and cpi. FSIM(Lol,Rol) determines whether lpi and rpi are aligned in the overlapping region; if they are well aligned, the FSIM value will be near 100. The average of the latter two values, AVG(FSIM(Lol,Col), FSIM(Rol,Col)), is used to measure any structural distortion in the projected image due to blending.
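For illustration, the sketch below shows the FSIM combination step under the assumption that the PC and GM maps of the two overlapping regions have already been computed (e.g., PC via a log-Gabor filter bank and GM via a Scharr operator); the constants T1 and T2 take the values suggested in [9], and the result is scaled to 0-100 as in the tables that follow.

```python
import numpy as np

def fsim_from_maps(pc_a, pc_b, gm_a, gm_b, T1=0.85, T2=160.0):
    """Combine precomputed PC and GM maps into an FSIM score (0-100)."""
    s_pc = (2 * pc_a * pc_b + T1) / (pc_a**2 + pc_b**2 + T1)  # PC similarity
    s_gm = (2 * gm_a * gm_b + T2) / (gm_a**2 + gm_b**2 + T2)  # GM similarity
    pc_m = np.maximum(pc_a, pc_b)  # weight each pixel by its more salient PC
    return 100.0 * np.sum(s_pc * s_gm * pc_m) / np.sum(pc_m)
```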

FSIM is a gradient-based similarity measure and is not completely invariant to large changes in intensity or color. This is evident from Fig. 3(a) and the FSIM result presented in Table 1(i), where a large photometric distortion results in an FSIM value of 93% even though the images are geometrically aligned. Therefore, we propose to use a feature-based structural similarity measure, built on the Scale Invariant Feature Transform (SIFT), alongside FSIM. SIFT is selected because it is one of the best feature detectors among the state of the art [20]. The idea is to extract features from the overlapping regions of the geometrically corrected images and calculate a distance-based measure between them. A variance-based statistical analysis, as proposed in [17], is then performed to remove outliers (wrongly matched SIFT features). The distances of all remaining features are averaged to obtain a value named Db_SIFT (distance based on SIFT). The distance is measured between the features extracted from the overlapping regions of the left and right projector images, yielding Db_SIFT(Lol,Rol), and between the overlapping regions of the left, right and complete projector images, yielding AVG(Db_SIFT(Lol,Col), Db_SIFT(Rol,Col)). In the ideal case, i.e., perfect alignment or minimum structural error, the features in the overlapping region of the two images overlap exactly, and hence the ideal value of Db_SIFT = 0. We propose to use the normalized value of Db_SIFT, NDb_SIFT, which has a value range of [0,1], as follows:
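NDb_SIFT = 1 / (1 + 0.1·Db_SIFT)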

where the scaling factor 0.1 was chosen empirically. In the ideal case, the SIFT features of both images lie at the same locations, so the distance between them is zero, i.e., Db_SIFT = 0 and NDb_SIFT = 1. As the error increases, the locations of the SIFT features in the two images drift apart; the value of Db_SIFT therefore increases and that of NDb_SIFT decreases.
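A minimal sketch of this computation is given below, assuming OpenCV's SIFT implementation and brute-force matching; the variance-based outlier rejection of [17] is approximated here by a simple two-sigma filter, and the result is scaled to 0-100 for readability, as in the result tables.

```python
import cv2
import numpy as np

def ndb_sift(img_a, img_b):
    """NDb_SIFT between two overlapping regions (0-100, higher is better)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    # Pixel displacement of each matched keypoint pair
    d = np.array([np.hypot(kp_a[m.queryIdx].pt[0] - kp_b[m.trainIdx].pt[0],
                           kp_a[m.queryIdx].pt[1] - kp_b[m.trainIdx].pt[1])
                  for m in matches])
    d = d[np.abs(d - d.mean()) <= 2 * d.std()]  # crude outlier rejection
    db_sift = d.mean()                          # Db_SIFT: mean feature distance
    return 100.0 / (1.0 + 0.1 * db_sift)        # normalization given above
```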

Fig. 3. (a) FSIM results are poor for images with a large intensity difference. (b) SIFT fails to match features for perfectly aligned overlapping images; cyan lines show incorrect feature matches.

To obtain an overall measure of geometric quality, we propose to combine FSIM with NDb_SIFT to yield the overall geometric quality (GQ) as follows:
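GQ = α·FSIM(Lol,Rol) + β·NDb_SIFT(Lol,Rol) + φ·AVG(FSIM(Lol,Col), FSIM(Rol,Col)) + ψ·AVG(NDb_SIFT(Lol,Col), NDb_SIFT(Rol,Col))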

For testing, we have averaged the results obtained from all the above measures, i.e., α=β=φ=ψ=0.25, as a product would make the result sensitive to any single measure.

It is interesting to observe that the SIFT-based measure alone would not be effective in calculating geometric distortion: SIFT may fail if a repetitive pattern, as shown in Fig. 3(b), is present in the image. The results of Table 1(i) indicate that even when two grid patterns are perfectly aligned, Db_SIFT returns a high distance value, incorrectly indicating a mismatch, whereas FSIM correctly determines that the grid pattern images are perfectly aligned.

2.2. Photometric Quality

Similar to geometric distortions, variations in the intensity or color of two adjacent projectors may result in a visible seam in the projected image. Photometric correction is responsible for making the colors and intensities of the two projectors similar. Therefore, we propose to measure the photometric quality of the projected image by comparing intensity and color values using the low-frequency information of the overlapping regions of the projected images. Please note that FSIM employs gradient and phase congruency maps, i.e., the high-frequency information of an image, to determine geometric quality; hence, complementary information is used to assess geometric and photometric quality. FSIMc, proposed in [9], takes color information into consideration while calculating structural similarity, but it does not provide separate estimates of color and intensity similarity. Hence, to determine the similarity in intensity between two images, we use the Intensity Magnitude Ratio (IMR) proposed by Qureshi et al. in [11]. IMR has an ideal value of 100, and its value decreases as the intensity difference between images increases; it is calculated as defined in [11].

To determine any errors in the blend mask, the intensity variation between the lpi and cpi and between the rpi and cpi is determined using IMR. Please note that the subscript lf represents low frequency.

To determine color similarity between two images, we propose to use normalized spectral angle mapper (NSAM). SAM has been successfully used in the literature for determining spectral similarity of satellite images [10]. SAM determines the angle between RGB components of images and is invariant to their magnitude (i.e., intensity). We have modified SAM and proposed NSAM, which is calculated between the corresponding pixels in the overlapping region of two geometrically and photometrically corrected images, as:
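NSAM(a,b) = 1 / (1 + 0.1·SAM(a,b))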

As defined by Wald in [10], lower values of SAM (2 or lower) represent good spectral fidelity, with 0 being the ideal value. Accordingly, NSAM = 1/(1 + 0.1(0)) = 1 when there is no error and NSAM = 1/(1 + 0.1(10)) = 0.5 when there is error (e.g., SAM = 10). As with IMR, the following NSAM values are calculated: NSAM(Lol_lf, Rol_lf) and AVG(NSAM(Lol_lf, Col_lf), NSAM(Rol_lf, Col_lf)). Since SAM measures an angle, it could be normalized by 90; however, this would make NSAM difficult to interpret, so we use the normalization shown above. NSAM and IMR are pixel-based measures and are affected by geometric misalignment. Hence, they are calculated by dividing the images into blocks of size 32×32 and are evaluated only for those blocks whose FSIM value is greater than 95%.
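The following sketch illustrates this block-wise evaluation. The spectral angle follows the usual SAM definition [10]; the IMR expression used here (a per-block ratio of mean intensities) is only an assumed placeholder, since the exact formula is defined in [11]; and fsim_mask is assumed to be a precomputed boolean map marking the 32×32 blocks whose FSIM exceeds 95%.

```python
import numpy as np

def sam_deg(a, b):
    """Mean spectral angle (in degrees) between corresponding RGB pixels."""
    dot = np.sum(a * b, axis=-1)
    norms = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12
    return np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0))).mean()

def blockwise_nsam_imr(low_a, low_b, fsim_mask, block=32):
    """NSAM and IMR (0-100) over well-aligned 32x32 blocks of the LF images."""
    nsam_vals, imr_vals = [], []
    h, w = low_a.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if not fsim_mask[y // block, x // block]:
                continue  # skip blocks with FSIM <= 95% (misaligned)
            pa = low_a[y:y + block, x:x + block].astype(float)
            pb = low_b[y:y + block, x:x + block].astype(float)
            nsam_vals.append(100.0 / (1.0 + 0.1 * sam_deg(pa, pb)))
            ia, ib = pa.mean(), pb.mean()
            imr_vals.append(100.0 * min(ia, ib) / max(ia, ib))  # assumed IMR form
    return np.mean(nsam_vals), np.mean(imr_vals)
```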

NSAM and IMR determine the difference in color and intensity between two images, respectively. Therefore, to determine the perceptual difference in color, we use ΔE in the CIEDE2000 color space [18]. Please note that ΔE represents a quantitative value for the “Just Noticeable Difference” and is affected by both the difference in intensity (ΔL) and the difference in color (ΔC) [18]. Greater values of ΔE indicate a larger error or difference in photometric similarity between two images. To identify the minimum and maximum values, we tested different color combinations and found that comparing a white and a black image resulted in ΔE = 100; for all other cases, e.g., white with green, white with red, or green with red, ΔE < 100. Thus, we normalize ΔE by 100: NΔE = (100 − ΔE)/100. Finally, the combined photometric quality (PQ) index becomes:
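PQ = γ·NSAM(Lol_lf, Rol_lf) + δ·IMR(Lol_lf, Rol_lf) + ϕ·AVG(NSAM(Lol_lf, Col_lf), NSAM(Rol_lf, Col_lf)) + ω·AVG(IMR(Lol_lf, Col_lf), IMR(Rol_lf, Col_lf)) + λ·NΔE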

where γ=δ=ϕ=ω=λ=0.2, because detecting both color and intensity errors is equally important for photometric quality assessment.
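As an illustration, the sketch below computes NΔE with scikit-image's CIEDE2000 implementation (the library choice is an assumption; the paper does not name one) and combines the five photometric terms with the equal weights given above; input images are assumed to be RGB arrays scaled to [0, 1].

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def n_delta_e(rgb_a, rgb_b):
    """NΔE between two RGB images, on a 0-100 scale (higher is better)."""
    delta_e = deltaE_ciede2000(rgb2lab(rgb_a), rgb2lab(rgb_b)).mean()
    return 100.0 * (100.0 - delta_e) / 100.0

def pq(nsam_lr, imr_lr, nsam_avg, imr_avg, nde, w=0.2):
    """Combined photometric quality with equal weights (γ=δ=ϕ=ω=λ=0.2)."""
    return w * (nsam_lr + imr_lr + nsam_avg + imr_avg + nde)
```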

Please note that the proposed geometric and photometric quality assessment measures represent quality on a scale of 0 to 100. Thus, they can be combined into a single quality assessment measure by simple averaging. This is not possible for existing quality assessment measures, as they have different scales and units: SSIM returns a value between 0 and 1, SAM returns an angle, and SIFT returns a distance in pixels.

 

3. Experimental Results and Analysis

Testing of the proposed quality assessment framework was performed on high-resolution (2048×768) images. Each image is split into two parts by the projection system before being displayed. Two types of images were used during testing: 1) a structured light pattern and 2) a group of natural images. The structured light pattern is used to assess the geometric and photometric quality of a projection system, and the natural images are used as a test sample space to determine the perceptual quality of a projected image. The 20 natural images used in the experiments are shown in Fig. 4. The experimental setup comprised two projectors placed adjacent to each other with approximately 10% overlap, projecting on a planar white wall. The camera used for capturing the projected images was a Canon EOS 450D (Rebel XSi), positioned at the center with respect to the projection and able to capture both projected images completely without being moved, as shown in Fig. 5. The captured RAW-mode images were converted to linear TIFF format using Canon’s conversion utility. Testing was conducted in a dark environment without ambient light, and the camera settings remained unchanged throughout testing.

Fig. 4. The images of the test data set (general scenic images).

Fig. 5. Process of image acquisition and calculation of the proposed quality assessment indices.

3.1. Quality Assessment of Projection System

The proposed framework can be used to determine the quality of a projection system by projecting the uniquely coded Structured Light Pattern (SLP) defined in [19]. The uniqueness of the coded pattern allows FSIM and NDb_SIFT to compare the same features with each other and thus results in an appropriate assessment of geometric quality. We have added colored bands in the background of the SLP so that NSAM, IMR and NΔE can be used to determine the similarity in colors and intensity between the projected images.

For testing the quality of an MPD system, four cases are considered: 1) projected image containing minimum geometric and photometric error (Ref), 2) projected image containing only geometric error (GD), 3) projected image containing only photometric error (PD) and 4) projected image containing both geometric and photometric errors (GPD). Geometric errors were introduced by randomly moving one projector with respect to the other, while photometric errors were introduced by changing the colors of one projector with respect to the other (using the projector’s color settings and temperature). The resultant images captured using a camera are shown in Fig. 6. In another set of experiments with the SLP, we modified the projection system to add a horizontal shift of 2 and 4 pixels in the projected image. The FSIM values varied from the Ref value by approximately 15% for a shift of 2 pixels and 29% for a shift of 4 pixels. The NDb_SIFT values varied from 85.58 (Ref) to 67.80 (2-pixel shift) and 53.58 (4-pixel shift), i.e., variations of 18% and 32%, indicating that a larger shift causes a larger decrease in the quantitative measures.

Fig. 6. Projected and captured structured light pattern: (a) Ref image; (b) image with GD; (c) image with PD; (d) image with both GD and PD (GPD).

Returning to the results for the SLP, Table 2 clearly shows that the FSIM value decreases from 88.37 (Ref) to 56.41 (GD), a difference of 32%, when geometric distortion is introduced between the left and right projected images. The measure remains nearly unaffected when only photometric distortion is added: the Ref value of 88.37 changes to 87.01, a difference of 1%. Finally, when both geometric and photometric distortions are added to the projection system, the FSIM index registers a sharp decrease from 88.37 (Ref) to 57.77 (GPD). This indicates that FSIM is capable of identifying geometric distortions while being invariant to photometric distortions.

Table 2. FSIM, NDb_SIFT, NSAM, IMR and NΔE values for the SLP images of Fig. 6 containing (a) minimum geometric and photometric errors (Ref), (b) Geometric Distortion (GD), (c) Photometric Distortion (PD) and (d) both distortions (GPD).

Continuing with the images of Fig. 6, NDb_SIFT values decrease from 85.73 (Ref) to 59.04 and 54.54 for the GD and GPD cases, respectively, correctly detecting degradation in geometric alignment. However, when only photometric distortion is added, the value changes from 85.73 to 84.06, a relatively small change, and hence indicates no significant geometric distortion. The overall Geometric Quality (GQ) measure remains approximately constant when PD is added and degrades when GD is added to the projection system.

For assessing photometric distortion, we use the NSAM, IMR and NΔE measures. From Table 2, the value of the perceptual NΔE measure remains approximately constant when geometric distortion (GD) is added to the projected image, i.e., the difference between the Ref (96.29) and GD (93.86) values remains close to the Just Noticeable Difference values reported in the literature (2.3 in [12] and 3 in [21]). However, when photometric distortion is added in the PD and GPD cases, the index decreases from 96.29 to 87.83 and 87.36, respectively, indicating that the measure is able to detect degradation in photometric quality. Similarly, IMR values remain approximately the same as in the Ref case when GD is added to the projected image; however, when PD is added, the value of IMR drops from 89.17 to 75.07, indicating an intensity change. Likewise, NSAM values drop from 82.21 (Ref) to 50.16 (PD) and 52.54 (GPD) when photometric distortion is introduced between the two projectors. The advantage of using an SLP is that it can be used with the proposed framework to compare the geometric and photometric quality of any MPD system, and the indices obtained can be used for direct comparison.

3.2. Perceptual Quality Assessment of Projected Images

Geometric errors in an MPD system may occur because of incorrect alignment. However, the visibility of these geometric errors to a human observer depends on the structure present in the overlapping region of the projected image. For example, the images of Fig. 8 were projected by a projection system with the same amount of geometric error, yet the error is clearly visible in Fig. 8(c, d) compared to Fig. 8(a, b). Thus, the perceptual quality of projected images is also an interesting problem. To determine the perceptual quantitative quality of the projected images, the test images of Fig. 4 were projected for four scenarios, i.e., Ref, GD, PD and GPD; selected results are shown in Figs. 7, 8, 9 and 10, respectively, and their corresponding quality indices are presented in Tables 3 and 4.

Fig. 7. Overlapping region of projected image with minimum geometric and photometric error, (Ref) column of Tables 3-A and 3-B.

Fig. 8. Overlapping region of projected image with geometric error, (GD) column of Tables 3-A and 3-B.

Fig. 9. Overlapping region of projected image with photometric error, (PD) column of Tables 3-A and 3-B.

Fig. 10. Overlapping region of projected image with geometric and photometric error, (GPD) column of Tables 3-A and 3-B.

Table 3-A. Quantitative geometric quality assessment for the images presented in Figs. 7, 8, 9 and 10. FSIM, NDb_SIFT, AVG(FSIM(L,C), FSIM(R,C)) and GQ measures are compared across the Ref, GD, PD and GPD columns.

Table 3-B. Quantitative photometric quality assessment for the images presented in Figs. 7, 8, 9 and 10. NSAM, IMR, NΔE and PQ measures are compared across the Ref, GD, PD and GPD columns.

Table 4. Quantitative and qualitative results of geometric and photometric quality for the test dataset of 20 images for (i) Ref, (ii) GD, (iii) PD and (iv) GPD cases. The trend highlights that both FSIM and NDb_SIFT are capable of estimating geometric error while being invariant to photometric distortions. (a) FSIM (b) NDb_SIFT (c) GQ (d) NSAM (e) NΔE (f) PQ (g) user evaluation of qualitative quality as compared to the GQ measure (h) user evaluation of qualitative quality as compared to the PQ measure (i) comparison of the proposed measure with the Structural Similarity (SSIM) Index (j) comparison of the proposed measure with Mean Squared Error (MSE).

Table 3-A presents the results of FSIM and NDb_SIFT for geometric quality assessment of the images presented in Figs. 7, 8, 9 and 10. Please note that, due to shortcomings of the geometric [7] and photometric correction schemes used during calibration of the MPD system, the quantitative values obtained for the Ref case are not ideal. It is clear from the results of FSIM and NDb_SIFT that whenever geometric distortion is introduced in the projection system, the value of the index decreases from its reference value. For the images of Fig. 4(b, l), which contain homogeneous areas in the overlapping region, the value of FSIM decreases by approximately 18% and 12%, i.e., from 97.42 (Ref) to 79.65 (GD) and from 95.57 (Ref) to 83.11 (GD). However, for images containing more structure in the overlapping region, such as those in Fig. 4(j, n, q, i), the value of FSIM decreases by 29%, 27%, 28% and 20%, respectively. The degradation in FSIM value thus depends on the amount of structure present in the image, even though the parameters of the projection system and the introduced geometric errors are the same. For NDb_SIFT, this difference lies between 46% and 49%. Thus, the measures correctly identify when geometric distortion appears in the projected image. The overlapping regions of Fig. 8(c, d, e, f), marked in red, appear blurred, indicating misalignment of structure. Similar degradation in geometric quality can be observed in the quantitative FSIM values for the images in Fig. 10, which contain both geometric and photometric distortions (GPD). Comparing the Ref and PD columns, the change in FSIM value is less than 1%, indicating no geometric distortion when PD is added to the projection system. The images shown in Fig. 9 were captured when PD was introduced into the system. NDb_SIFT also does not change significantly between the Ref and PD cases, with an average difference of less than 1% (87.07 compared to 87.49), again indicating its invariance to photometric distortions. For the complete data set of 20 images, only Fig. 4(b) exhibited a large difference (approximately 4%) in NDb_SIFT values between the Ref and PD images. It was observed that for Fig. 4(b), SIFT matched only three features between the two projected images, and hence the results obtained are unreliable. This can be rectified by using SIFT results only when the number of matched features is a significant percentage of the total features.

The results of the FSIM and NDb_SIFT indices are further corroborated by the fact that, for the 20 images tested, the average difference between the FSIM values of the Ref and GD cases is approximately 23%, and between the NDb_SIFT values it is 47%. The invariance of FSIM and NDb_SIFT to photometric distortions is clear from the fact that the difference in the average values of the Ref and PD columns is less than 1% for both indices. The results for all 20 images are presented in Table 4. The graphs plot the image number on the x-axis and the quantitative measure value on the y-axis. In Table 4(a), the graphs of the Ref and PD cases, represented by the blue and green lines, respectively, return approximately the same FSIM values. This indicates that the measure is invariant to the photometric distortions shown in Fig. 9. For both the GD and GPD cases, the purple and red lines lie far below the blue Ref line, indicating lower FSIM values for the geometric error (GD and GPD) cases. A similar trend can be observed for the NDb_SIFT measure. As GQ is an average of the FSIM and NDb_SIFT measures, it follows the same trend.

Consider the image of Fig. 4(l). For the Ref case, the value of NSAM is 86.58, whereas the values for the GD and PD cases are 86.09 and 40.90, respectively, showing a difference of less than 1% when geometric distortion is added and of 46% when photometric distortion is added. Similarly, for the images in Fig. 4(b, j, n, q), the NSAM differences between the Ref and PD images are 37%, 47%, 46% and 53%, compared to differences of 13%, 0.5%, 0.6% and 0.4% between the Ref and GD images. The large difference for the first GD result may be caused by a large shift in the overall structure of the image.

The difference between the average NSAM values of the Ref images in Fig. 7 and the PD images in Fig. 9 is 42% (82.62 vs. 40.33). However, the difference between the Ref images in Fig. 7 and the GD images in Fig. 8 is approximately 2% (82.62 vs. 80.65), clearly indicating the invariance of SAM to geometric distortions and its sensitivity to photometric errors. This is also evident from the comparison graphs of NSAM values shown in Table 4(d): the graphs for the Ref and GD images closely follow each other, whereas the graphs of PD and GPD lie below that of the Ref images for all the images.

When GD is introduced in the projected images, the difference in NΔE values between the Ref and GD images is less than 2%. However, when PD is introduced, the difference in NΔE values between the Ref and PD images of Fig. 7 and Fig. 9 increases to 8%, 13%, 8%, 9%, 14% and 6%, respectively. The average difference in NΔE values for the 20 images is less than 1% when the Ref and GD images of Fig. 7 and Fig. 8 are compared; however, the average difference for images with PD is approximately 10%, clearly indicating that the photometric errors will be visible to a human observer. This trend can be verified from the graph presented in Table 4(e): the GD line has approximately the same NΔE values as the Ref line, but the PD and GPD lines have smaller values.

The Photometric Quality (PQ) measure is the average of the NSAM, IMR, AVG(NSAM(L,C), NSAM(R,C)), AVG(IMR(L,C), IMR(R,C)) and NΔE measures. For images containing GD, the value of PQ is approximately the same as for the Ref images, indicating no significant change in photometric quality. However, for images containing photometric errors (PD and GPD columns), a significant decrease in the PQ value is observed compared to the Ref column.

3.3. Comparison of Quantitative Measures to Qualitative Survey

The International Telecommunication Union – Telecommunication Standardization Sector (ITU-T) standard for qualitative quality assessment suggests performing a Degradation Category Rating (DCR) to assess the quality of an image. In this context, we showed the images in Fig. 4, with and without geometric and photometric distortions, to a group of 15 individuals who were not familiar with the details of the experiment. This quality assessment will be referred to as the User Response. The users were asked to rate the images on a scale of 1 to 5, with 1 being the worst quality and 5 the best. They were not asked to specify the type of error but to rate the images based upon their perception of the quality of each image. The results obtained are presented in Table 4(g, h).

The graph of Table 4(g) presents the User Response for the 20 test images for the Ref and GD cases. The first observation is that both the qualitative evaluation by users and the quantitative evaluation by the FSIM and NDb_SIFT measures rate the images with GD as having poorer quality than the Ref images. The general trend of the quantitative evaluation, the blue and purple lines in Table 4(a) (representing the Ref and GD cases), is similar to the qualitative evaluation trend of the blue and red lines in Table 4(g).

The graphs of Table 4(d, e, h) present the ratings of the 20 test images for the Ref and PD cases using the User Response and the quantitative ratings obtained by the IMR, NSAM and NΔE measures. The first observation is that in Table 4(h), the green curve representing the photometric distortion case lies below the blue line representing the Ref images. The second observation is that the green line representing photometric distortion follows a similar trend in Table 4(d, e). This indicates that the PQ measure matches the User Response in accurately determining the severity of photometric error. For the images of Fig. 4(n-g), the quantitative measure PQ does not exactly follow the trend of the qualitative user evaluation: the user evaluation suggests that the effect of photometric distortion on these images is similar, whereas the quantitative measure shows the photometric distortion increasing across these images. We believe this is because the users evaluated the images independently (comparing two at a time or evaluating a single image, as per the guidelines of the ITU-T standard), and some differences may not have been noticeable because the images were not shown in sequence.

3.4. Comparison of Proposed Measures with SSIM and MSE

Table 4(i, j) presents the results of quality assessment obtained using the Structural Similarity Index (SSIM) and Mean Squared Error (MSE) measures. From the graph of Table 4(i), it is clear that SSIM is not invariant to photometric distortion: the SSIM value decreases from the Ref case even when only photometric distortion is present in the projection system, as can be observed by comparing the blue line with the green line. If SSIM were invariant, the value would have remained unchanged and the lines would lie close to each other. In the case of MSE, a lower value means less error, and hence the reference, represented by the blue line, is at the bottom of the graph. Both geometric and photometric errors increase the MSE value, as shown by the red and green curves, respectively, although the measure is less affected by geometric distortion than by photometric distortion. Since both measures register a significant change in value for either geometric or photometric distortion, they cannot be used to identify the type of error in the projection system. In contrast, the proposed geometric and photometric quality measures can evaluate geometric and photometric errors individually and can also be combined by simple averaging.

3.5. Scalability of the Proposed Framework

To scale the proposed framework to a larger number of projectors, we calculate the geometric and photometric quality for each overlapping region and determine the overall GQ and PQ using:
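GQ = (1/N) · Σ_(i=1..N) GQ_i

PQ = (1/N) · Σ_(i=1..N) PQ_i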

where N is the number of overlapping regions in the projection system, and GQ_i and PQ_i are the geometric and photometric quality measures of the i-th overlapping region. Fig. 11 shows some test images projected using a three-projector MPD system comprising two projectors placed side by side and a third one overlapping both projectors from the top. We tested our scheme for six different error scenarios. Please note that these scenarios were created to demonstrate the effectiveness of the proposed framework for different types of errors.

Fig. 11. Projected images containing different combinations of photometric and geometric errors, used for testing the three-projector MPD system.

Table 5 shows the results of the GQ and PQ measures for the images in Fig. 11. Each result is obtained by averaging the results between the left and right projectors, the left and top projectors, and the right and top projectors. The results in Table 5 confirm that with the introduction of geometric error in the system, the value of the proposed GQ measure decreases compared to the Ref case. The GQ measure shows less than 1% degradation when column (a) is compared to columns (b) or (c), which report the GQ results for images containing photometric distortions. Comparison of column (a) with column (d) indicates that geometric distortion in the projected image results in a degraded GQ value, and this value decreases further if both the left and top projectors are geometrically misaligned (compare column (a) with (d) and then with (e)).

Table 5. Quantitative results for assessing the quality of the three-projector system for the images in Fig. 11.

The second row of Table 5 presents the results of the photometric quality assessment. The values of PQ indicate that when photometric distortions are added to the projected images, the value of PQ generally decreases with respect to that of the Ref image.

 

4. Conclusion

In this work, we have proposed a framework for assessing both the geometric and the photometric quality of a multi-projector display system and of the images projected using it. Such systems are gaining popularity for providing interactive immersive environments for entertainment [22]. Geometric quality (GQ) was calculated as an average of the Feature Similarity Index (FSIM) and a distance-based feature-matching measure (NDb_SIFT). It was observed that the GQ measure correctly registered a large difference in its values when the geometric distortion (GD) results were compared to the Ref results. However, when photometric distortions (PD) were added to the system, the value of GQ changed by no more than 1%, indicating the invariance of the proposed geometric quality assessment measure to PD.

For the evaluation of photometric quality (PQ) of the projected images, we proposed the use of normalized Spectral Angle Mapper (NSAM), Intensity Magnitude Ratio (IMR) and NΔE measures to determine color, intensity and perceptual photometric variations, respectively. Based on the experimental evaluations, it was demonstrated that the proposed indices have meaningful utility and can be used to determine and improve the quality of a projection system. The values of PQ are approximately invariant to geometric distortions while being sensitive to photometric distortions in the projected images.

It was also demonstrated that the proposed framework could determine the quality of a projection system by using a structured light pattern. This allows quantitative comparison of the quality of two different multi-projector display systems. It was also demonstrated that the same framework could assess the perceptual quality of images projected from such systems, taking into account the content of the projected images.

References

  1. G. Wallace, H. Chen and K. Li, "Color Gamut Matching for Tiled Display Walls," in Proc. of International Immersive Projection Technologies Workshop, Eurographics workshop on Virtual Environments, pp. 1-10, 2003.
  2. B. Sajadi, M. Lazarov, M. Gopi and A. Majumder, “Color Seamlessness in Multi-Projector Displays Using Constrained Gamut Morphing,” IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1317-1326, 2009. https://doi.org/10.1109/TVCG.2009.124
  3. M. Harville, B. Culbertson, I. Sobel, D. Gelb, A. Fitzhugh and D. Tanguay, "Practical Methods for Geometric and Photometric Correction of Tiled Projector," in Proc. of Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06), pp. 5, 2006.
  4. D. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  5. L. Juan and O. Gwun, "SURF applied in Panorama Image Stitching," in Proc. of Second International Conference on Image Processing Theory Tools and Applications, pp. 495-499, 2010.
  6. B. Sajadi and A. Majumder, “Auto Calibrating Tiled Projectors on Piecewise Smooth Vertically Extruded Surfaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 9, pp. 1209-1222, 2011. https://doi.org/10.1109/TVCG.2011.33
  7. Atif Ahmed, R. Hafiz, M. M. Khan, Y. Cho and J. Cha, “Geometric Correction for Uneven Quadric Projection Surfaces Using Recursive Subdivision of Bezier Patches,” ETRI Journal, vol. 35, no. 6, pp. 1115-1125, 2013. https://doi.org/10.4218/etrij.13.0112.0597
  8. B. Sajadi, M. Lazarov and A. Majumder, "ADICT: Accurate Direct and Inverse Color Transformation," in Proc. of European Conference on Computer Vision, pp. 72-86, 2010.
  9. L. Zhang, L. Zhang, X. Mou and D. Zhang, “FSIM: A Feature Similarity Index for Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, 2011. https://doi.org/10.1109/TIP.2011.2109730
  10. L. Wald, Data Fusion: Definitions and Architectures - Fusion of Images of Different Spatial Resolutions, ISBN 2-911762-38-X, ENSMP, 2002.
  11. H. S. Qureshi, M. M. Khan, R. Hafiz and Y. Cho, “Quantitative Quality Assessment of Stitched Panoramic Images,” IET Image Processing, vol. 6, no. 9, pp. 1348-1358, 2011. https://doi.org/10.1049/iet-ipr.2011.0641
  12. M. D. Fairchild, Color Appearance Models, John Wiley and Sons, 2005.
  13. A. K. Moorthy and A. C. Bovik, “Visual Quality Assessment Algorithms: What does the future hold?,” Multimedia Tools and Applications Journal, vol. 51, no. 2, pp. 675-696, 2011. https://doi.org/10.1007/s11042-010-0640-x
  14. M. Solh and G. Alregib, “MIQM: A Multicamera Image Quality Measure,” IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 3902-3914, 2012. https://doi.org/10.1109/TIP.2012.2200490
  15. L. Zou, J. Chen and J. Zhang, “Assessment approach for image mosaicking algorithms,” Optical Engineering Letters, vol. 50, no. 11, pp. 1-3, 2011.
  16. J. L. Starck and F. Murtagh, “Image Restoration with Noise Suppression Using the Wavelet Transform,” Astronomy and Astrophysics, vol. 288, no. 1, pp. 342-350, 1994.
  17. M. Décombas, F. Dufaux, E. Renan, B. Pesquet-Popescu and F. Capman, "A New Object Based Quality Metric Based on SIFT and SSIM," in Proc. of IEEE Int'l Conference on Image Processing, Orlando, FL, USA, pp. 1493-1496, 2012.
  18. G. Sharma, W. Wu and E. N. Dalal, “The CIEDE2000 Color-Difference Formula: Implementation Notes, Supplementary Test Data, and Mathematical Observations,” Color Research and Applications, vol. 30, no. 1, 2005. https://doi.org/10.1002/col.20070
  19. T. J. Yang, Y. M. Tsai and L. G. Chen, "Smart Display: A Mobile Self-Adaptive Projector-Camera System," in Proc. of IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, 2011.
  20. J. Heinly, E. Dunn and J. Frahm, "Comparative Evaluation of Binary Features," in Proc. of Computer Vision - ECCV 2012, Lecture Notes in Computer Science, pp. 759-773, 2012.
  21. Y. Yang, J. Ming and N. Yu, “Color Image Quality Assessment Based on CIEDE2000,” Advances in Multimedia, 2012. doi:10.1155/2012/273723.
  22. J. M. Seok and Y. Lee, “Visual-Attention-Aware Progressive RoI Trick Mode Streaming in Interactive Panoramic Video Service,” ETRI Journal, vol. 36, no. 2, pp. 253-263, 2014. https://doi.org/10.4218/etrij.14.2113.0012