The Effects of Image Dehazing Methods Using Dehazing Contrast-Enhancement Filters on Image Compression

  • Wang, Liping (School of Mechanical, Electrical and Information Engineering, Shandong University) ;
  • Zhou, Xiao (School of Mechanical, Electrical and Information Engineering, Shandong University) ;
  • Wang, Chengyou (School of Mechanical, Electrical and Information Engineering, Shandong University) ;
  • Li, Weizhi (School of Mechanical, Electrical and Information Engineering, Shandong University)
  • Received : 2015.11.17
  • Accepted : 2016.06.11
  • Published : 2016.07.31

Abstract

To obtain well-dehazed images at the receiver while sustaining low bit rates in the transmission pipeline, this paper investigates the effects of image dehazing methods using dehazing contrast-enhancement filters on image compression for surveillance systems. First, this paper proposes a novel image dehazing method based on a new way of calculating the transmission function, namely the direct denoising method. Next, we deduce the effects of the direct denoising method and of the image dehazing method based on dark channel prior (DCP) on image compression in terms of ringing artifacts and blocking artifacts, and conclude that the direct denoising method performs better than the DCP method for decompressed (reconstructed) images. We also improve the direct denoising method to obtain more desirable dehazed images with higher contrast, using the saliency map as the guidance image to modify the transmission function. Finally, we adjust the parameters of the dehazing contrast-enhancement filters to obtain the corresponding composite peak signal-to-noise ratio (CPSNR) and blind image quality assessment (BIQA) of the decompressed images. Experimental results show that different filters have different effects on image compression, and that our proposed dehazing method can strike a balance between image dehazing and image compression.

1. Introduction

Remote video surveillance systems are widely used in daily life, for example in aerial imagery [1] and remote sensing [2]. Images of outdoor scenes often suffer from bad weather conditions such as haze, fog, and smoke. If a remote camera is located in an environment where fog and haze are common (e.g., near an ocean), the images it captures usually lose contrast and fidelity, because light is absorbed and scattered by turbid media such as particles and water droplets in the atmosphere during propagation. The images captured by a remote video surveillance system are transmitted to a location or a control station where they are either stored or used for real-time surveillance. If a foggy image were transmitted without any processing, the image quality at the terminal would be very poor. Thus, it is necessary to implement image dehazing in remote video surveillance systems. In addition, images must be compressed to accommodate low bandwidths or low-complexity coders. Therefore, it is important to study the influence of image dehazing on image compression (image coding).

Dehazing is the process of removing haze from a hazy image and enhancing image contrast. Because the concentration of haze varies from place to place and is difficult to estimate from a hazy image, image dehazing is a challenging task. Early researchers used conventional image processing techniques to remove haze from a single image, such as histogram-based dehazing methods [3] [4]. However, their dehazing effects are limited, because a single hazy image rarely contains sufficient information. Later, researchers attempted to improve dehazing performance with multiple images or additional information, such as polarization-based methods [5] [6] and methods that obtain multiple images of the same scene under different weather conditions [7] [8]. In [9] and [10], dehazing is conducted based on given depth information. These algorithms can estimate scene depths and remove haze effectively but require multiple images or additional information, which limits their applications.

Recently, significant progress has been made in single-image dehazing based on a physical model. Tan [11] maximized the contrast of a hazy image, assuming that a haze-free image has a higher contrast ratio than a hazy one. However, the dehazed image tended to be overcompensated for the reduced contrast, yielding halo artifacts. Fattal [12] proposed removing haze from color images by independent component analysis, but this algorithm can fail in the presence of heavy haze. A novel haze removal algorithm based on dark channel prior (DCP) was proposed by He et al. [13]. The approach is simple and effective in most cases. However, noise in the sky can be amplified, and the algorithm is computationally intensive because time-consuming soft matting [14] is applied to refine object depths. Several improved algorithms have been proposed to overcome these weaknesses. To improve efficiency, Yu et al. [15], Gibson et al. [16], and Xiao and Gan [17] replaced the time-consuming soft matting with bilateral filtering [18], median filtering, and guided joint bilateral filtering, respectively. He et al. [19] proposed guided image filtering to replace soft matting, and Ancuti et al. [20] changed the block-based approach to a layer-based one. To improve dehazing quality, Kratz and Nishino [21] and Nishino et al. [22] modeled the image with a factorial Markov random field to estimate scene radiance more accurately. Meng et al. [23] proposed an effective regularization dehazing method to restore a haze-free image by exploring the inherent boundary constraint. Tang et al. [24] combined four types of haze-relevant features with a random forest and proposed a learning-based approach for robust dehazing. Li et al. [25] proposed a weighted guided image filtering (WGIF) method, and Li and Zheng [26] introduced a novel edge-preserving decomposition technique that estimates transmission using the WGIF.

In recent years, in addition to researching single-image dehazing methods, some researchers have studied the effects of image dehazing on image compression. The most representative work is that of Gibson et al. [16], who first artificially added fog to a fog-free image and then applied DCP [13] dehazing before and after image compression. Finally, the composite peak signal-to-noise ratio (CPSNR) and blind image quality assessment (BIQA) were calculated with the help of the original haze-free image. The results show that better performance, with fewer artifacts and better coding efficiency, is achieved when dehazing is applied before compression. The aforementioned dehazing methods use many filtering algorithms, including minimum filtering, median filtering, bilateral filtering, guided joint bilateral filtering, guided image filtering, and WGIF. However, Gibson et al. investigated only the effects of dehazing (before or after image compression) on image and video coding; they did not thoroughly discuss the effects of dehazing contrast-enhancement filters on image compression.

Along with the frequent occurrence of hazy weather, the means by which to carry out image dehazing and transmission more effectively require urgent attention. Filtering is a necessary tool for image dehazing, and the image must be compressed before transmission. Therefore, to design an outdoor monitoring system suitable for fog and haze, it is necessary to research the effects of dehazing contrast-enhancement filters on image compression.

In this paper, we use the DCP image dehazing method from [13] and, building on the conclusion of Gibson et al. [16], further investigate the effects of two different dehazing methods using dehazing contrast-enhancement filters on image compression. Our work covers the following four aspects. First, we propose a new method for calculating the image transmission function; the single-image dehazing method using this approach is called the direct denoising method. Second, we qualitatively analyze the effects of two different image dehazing methods (the DCP method and the direct denoising method) on image dehazing and image compression. Third, four different dehazing contrast-enhancement filters are used for dehazing, and a large number of simulation experiments are conducted to explore the effects of their parameters on image dehazing and image compression. Fourth, we improve the direct denoising method by using a saliency map as a guidance image to distinguish between the far scene and bright areas in the close scene.

The rest of this paper is organized as follows. In Section 2, we will give an overview of the image dehazing model and image compression system. In Section 3, a new method for calculating the transmission function for single-image dehazing is presented—namely, the direct denoising method. The application of dehazing contrast-enhancement filters is also introduced. In Section 4, we investigate what happens to a hazy image when it is compressed with JPEG and explore the different compression artifacts when two different image dehazing methods are applied. In Section 5, we improve the direct denoising method by using a saliency map to distinguish between the far scene and bright objects of the close scene. Extensive experimental results are reported in Section 6. Finally, conclusions and remarks on possible further work are given in Section 7.

 

2. Background of Image Dehazing and Image Compression

2.1 Dehazing Model

Almost all popular single-image dehazing algorithms are based on the widely used atmospheric scattering model, which was provided by McCartney in 1976 [27] and further derived by Narasimhan and Nayar [7] [8]. It can be expressed as follows [11] [12]:

I(x) = J(x)t(x) + A[1 − t(x)],    (1)

where J(x)=(Jr(x),Jg(x),Jb(x))T and I(x)=(Ir(x),Ig(x),Ib(x))T denote the original and observed r, g, b colors at pixel position x, respectively. A=(Ar,Ag,Ab)T is the global atmospheric light that represents ambient light in the atmosphere. t(x)∈(0,1) is the transmission of reflected light, which is determined by the distance d(x) between the scene point and the camera, called the scene depth, and is expressed as follows:

t(x) = e^(−βd(x)),    (2)

where β is the scattering coefficient, which depends on the size of the scattering particles.

For Eq. (1), J(x)t(x) is called the direct attenuation [11], which describes the scene radiance and its decay in the medium; A[1−t(x)] is called the airlight (atmospheric veiling) [11], which results from previously scattered light and leads to a shift in scene color. Because I(x) is known, the goal of dehazing is to estimate A and t, after which J(x) is restored according to Eq. (1). In the following analysis, we assume that scattering is homogeneous, which restricts β to be spatially invariant. It is worth noting that scene depth d(x) is the most important parameter for image dehazing.
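As a concrete illustration, the forward model of Eq. (1) and its inversion can be sketched in a few lines of Python. The array shapes, the β value, and the helper names are illustrative choices, not details from the paper:

```python
# A minimal sketch of the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)).
import numpy as np

def add_haze(J, t, A):
    """Synthesize a hazy image I from scene radiance J, transmission t, airlight A."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def remove_haze(I, t, A, t0=0.1):
    """Invert Eq. (1): J = (I - A) / max(t, t0) + A."""
    t_clip = np.maximum(t, t0)[..., None]
    return (I - A) / t_clip + A

# Example: a toy 4x4 RGB scene with depth-dependent transmission t = exp(-beta * d).
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))                       # haze-free scene radiance
d = np.linspace(1.0, 5.0, 16).reshape(4, 4)     # scene depth d(x)
t = np.exp(-0.5 * d)                            # Eq. (2) with beta = 0.5
A = np.array([0.8, 0.8, 0.8])                   # global atmospheric light
I = add_haze(J, t, A)
J_hat = remove_haze(I, t, A, t0=0.0)            # with the exact t, recovery is exact
```

With the true transmission available, the inversion recovers the scene exactly; real dehazing must first estimate t and A from I alone, which is the subject of the following sections.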

2.2 Image Compression Framework

In this paper, we explore how compression artifacts are affected when two different image dehazing methods using dehazing contrast-enhancement filters are employed before compression. The image compression standard used here is JPEG [28], which contains three basic steps: DCT, DCT coefficient quantization, and Huffman entropy encoding. The decoding process is the inverse of the encoding process. The basic JPEG encoding and decoding processes are presented in Fig. 1.

Fig. 1.Block diagram for JPEG compression.

The process of encoding is shown as follows:

Step 1. The input image is first converted into the YCbCr color space and then grouped into blocks of size 8×8.

Step 2. Before carrying out a blocked discrete cosine transform (BDCT), the input image data are shifted from unsigned integers to signed integers. For (P+1)-bit input precision, the shift is achieved by subtracting 2P.

Step 3. Each block is transformed by BDCT; i.e.,

F(u,v) = (1/4) C(u) C(v) Σ_{x=0}^{7} Σ_{y=0}^{7} f(x,y) cos[(2x+1)uπ/16] cos[(2y+1)vπ/16],    (3)

C(u) = 1/√2 for u = 0, and C(u) = 1 otherwise.    (4)

Each block will include 64 DCT coefficients composed of one DC coefficient and 63 AC coefficients.

Step 4. Quantize each matrix block; i.e.,

Fq(u,v) = ⌊F(u,v)/Q(u,v) + 1/2⌋,    (5)

where F(u,v) and Fq(u,v) are the DCT coefficients before and after quantization, respectively, and Q(u,v) is the quantization table. Quantization causes a loss of image energy, mainly high-frequency energy, whereas the energy of the image is concentrated mainly in the low frequencies. Thus, as long as we select an appropriate quantization coefficient, human eyes can hardly detect distortion in the decoded image.

Step 5. After quantization, the DC coefficient of each block is coded using differential pulse code modulation (DPCM). The 63 AC coefficients are converted into a 1-D zig-zag sequence in preparation for entropy encoding.
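Steps 2-4 above can be sketched as follows. The orthonormal DCT basis is standard; the flat 8-bit block and the uniform quantization table are illustrative stand-ins for real image data and the JPEG tables:

```python
# A sketch of JPEG Steps 2-4: level shift, 8x8 block DCT, and quantization.
import numpy as np

N = 8
# Orthonormal 2-D DCT-II via its basis matrix: F = T @ block @ T.T
u, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
T = np.sqrt(2.0 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
T[0, :] = np.sqrt(1.0 / N)

def bdct(block):
    return T @ block @ T.T

def quantize(F, Q):
    return np.round(F / Q)            # Eq. (5): round each coefficient to Q steps

block = np.full((N, N), 200.0)        # a flat 8-bit block (illustrative)
shifted = block - 128                 # Step 2: subtract 2^P for (P+1)-bit precision
F = bdct(shifted)                     # Step 3: one DC and 63 AC coefficients
Fq = quantize(F, Q=np.full((N, N), 16.0))   # Step 4 with a uniform table
```

For a flat block, only the DC coefficient is nonzero, which is why smooth regions compress so well under this scheme.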

 

3. Dehazing Methods Using Dehazing Contrast-Enhancement Filters

3.1 Image Dehazing Algorithm Based on Dark Channel Prior

He et al. [13] discovered DCP based on the observation of outdoor haze-free images: in most non-sky patches, at least one color channel has some pixels whose intensities are very low and close to zero. He et al. defined these pixels as the dark channel; i.e.,

Jdark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} Jc(y) ),    (6)

where c∈{r, g, b} is the color channel index, Jc(x) is a color channel of J, and Ω(x) is a local patch centered at x. According to DCP, Jdark(x)→0. It can be derived from the haze image model in Eq. (1) that

Ic(x)/Ac = t(x)Jc(x)/Ac + 1 − t(x),    (7)

where atmospheric light A can be estimated according to the intensity of foggy region pixels.

Assuming that the transmission in a local patch is constant and denoting it t̃(x), a morphological minimum operator (over the patch Ω(x) and the color channels) is applied to both sides of Eq. (7):

min_{y∈Ω(x)} ( min_c Ic(y)/Ac ) = t̃(x) min_{y∈Ω(x)} ( min_c Jc(y)/Ac ) + 1 − t̃(x),    (8)

Atmospheric light Ac is always positive. According to the characteristic of the dark channel, whose value for a fog-free image approximately equals 0, it can be derived that

min_{y∈Ω(x)} ( min_c Jc(y)/Ac ) → 0.    (9)

With Eq. (8) and Eq. (9), we can derive the transmission function; i.e.,

t̃(x) = 1 − min_{y∈Ω(x)} ( min_c Ic(y)/Ac ).    (10)

In practice, even on clear days, the atmosphere is not absolutely free of particles, so haze still exists when we look at distant objects. Moreover, the presence of haze is a fundamental cue for humans to perceive depth; this phenomenon is called aerial perspective. If haze is removed thoroughly, the image may seem unnatural, and the feeling of depth may be lost. Thus, a constant parameter ω (0<ω<1) is introduced into Eq. (10) to optionally keep a very small amount of haze for distant objects, and the final estimation of the transmission function is given by

t̃(x) = 1 − ω min_{y∈Ω(x)} ( min_c Ic(y)/Ac ).    (11)

In this paper, ω is fixed at 0.95 for all results reported. A fog-free image can be recovered by

Jc(x) = (Ic(x) − Ac) / max(t̃(x), t0) + Ac,    (12)

where t0 is a lower bound that keeps the transmission function away from zero.
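A minimal sketch of the pipeline of Eqs. (6), (11), and (12) might look as follows. The patch size, the synthetic input, and the use of SciPy's minimum filter for the patch-wise minimum are our illustrative assumptions, not the paper's implementation:

```python
# Dark channel, transmission estimate, and scene recovery (Eqs. (6), (11), (12)).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Eq. (6): patch-wise minimum of the per-pixel channel minimum."""
    return minimum_filter(I.min(axis=2), size=patch)

def estimate_transmission(I, A, patch=15, omega=0.95):
    """Eq. (11): t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(I / A, patch)

def recover(I, t, A, t0=0.1):
    """Eq. (12): J = (I - A) / max(t, t0) + A."""
    return (I - np.asarray(A)) / np.maximum(t, t0)[..., None] + np.asarray(A)

rng = np.random.default_rng(1)
I = 0.5 + 0.5 * rng.random((32, 32, 3))   # a toy, uniformly bright "hazy" image
A = np.array([0.95, 0.95, 0.95])          # assumed atmospheric light
t = estimate_transmission(I, A)
J = recover(I, t, A)
```

In a real system A would itself be estimated from the brightest dark-channel pixels, and the rough t would then be refined as described below.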

Fig. 2 shows two groups of foggy images (Goose and House) and the dehazed images based on the dark channel. It can be seen that although the images are generally dehazed, there are obvious halo artifacts at occlusion boundaries. This is because the transmission function is not always invariant in local regions.

Fig. 2.Image dehazing based on dark channel prior: (a) original images, (b) transmission function maps, and (c) dehazed images.

It is worth noting that the commonly used assumption of constant transmission within a local image patch is somewhat demanding. For this reason, the patch-wise transmission based on this assumption in [13] is often underestimated. From Fig. 2(b), it can be seen that the transmission function map is very rough, so it must be corrected. Because scattering coefficient β can be regarded as a constant under homogeneous atmospheric conditions, the transmission can be estimated easily according to Eq. (2) if the scene depth is given. In most of a neighborhood, the scene depth changes slowly; only in a small part of the neighborhood is there an abrupt change, which forms the edges of the transmission function map. A rough transmission function therefore requires some fine-tuning in local patches of the scene depth to highlight the regions (edges) of abrupt depth change.

He et al. [19] proposed the guided image filtering algorithm, which has been applied to dehazing. The image is filtered under a preassigned guidance image, and the filtered image has characteristics similar to those of the guidance image. The key assumption of the guided filter is that filter output q is a linear transform of guidance I in a window ωk centered at pixel k:

qi = ak Ii + bk,  ∀i∈ωk,    (13)

where (ak,bk) are linear coefficients assumed to be constant in ωk; their specific expressions are shown in Eq. (14):

ak = ( (1/|ω|) Σ_{i∈ωk} Ii pi − μk p̄k ) / (σk² + ε),  bk = p̄k − ak μk,    (14)

where ε is a regularization parameter preventing ak from being too large, μk and σk² are the mean and variance of I in ωk, |ω| is the number of pixels in ωk, and p̄k is the mean of input p in ωk. A pixel i is involved in all windows ωk that contain i, so the value of qi differs when it is computed in different windows. A simple strategy is to average all possible values of qi:

qi = (1/|ω|) Σ_{k:i∈ωk} (ak Ii + bk) = āi Ii + b̄i,    (15)

where āi = (1/|ω|) Σ_{k∈ωi} ak and b̄i = (1/|ω|) Σ_{k∈ωi} bk.    (16)

He et al. [19] used the characteristics of the guided filter, which can transfer the texture structure of the guidance image to the input image, to fix the rough transmission function. Using the foggy image and the transmission function map as the guidance image and the input image, respectively, the transmission function map is refined and can be expressed as

t̂(x) = ā(x)I(x) + b̄(x),    (17)

where t̂(x) is the refined transmission function, I is the foggy image, the filter input p is the rough transmission function t̃(x) in Eq. (11), and the coefficients ā and b̄ are averaged over a local patch ωk centered at k.
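The guided filter of Eqs. (13)-(16) can be sketched compactly with box filters. The window radius and regularizer below are illustrative values, and the synthetic guide/input pair only stands in for a real hazy image and rough transmission map:

```python
# A compact guided image filter built from uniform (box) filters.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """q_i = mean(a) * I_i + mean(b), with (a, b) from per-window linear fits."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # Eq. (14)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)  # Eqs. (15)-(16)

# Refining a noisy rough transmission map with the (grayscale) image as guide:
rng = np.random.default_rng(2)
guide = rng.random((32, 32))
rough_t = np.clip(guide + 0.1 * rng.standard_normal((32, 32)), 0, 1)
refined_t = guided_filter(guide, rough_t)
```

Because every step is a box filter, the cost is independent of the window radius, which is the main efficiency advantage over soft matting.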

Fig. 3 shows the experimental results of this method. Comparing Fig. 2(b) with Fig. 3(a), the transmission function maps in Fig. 3(a) have fewer blocking artifacts and are more accurate, especially at image edges. The quality of the dehazed images in Fig. 3(b) is higher than that in Fig. 2(c).

Fig. 3.Image dehazing of DCP method: (a) transmission function maps, and (b) dehazed images.

3.2 Proposed Image Dehazing Algorithm (Direct Denoising Method)

The essence of the DCP method is to transfer the texture structure of the foggy image to the transmission function map and adjust the depth of the local scene. In this paper, a new method for calculating the transmission function is presented; the specific algorithm is shown as follows. The image dehazing method based on the proposed transmission function is referred to as the direct denoising method.

First, only one minimal operation (over the color channels) is performed on both sides of Eq. (7); that is,

min_c Ic(x)/Ac = t(x) min_c Jc(x)/Ac + 1 − t(x).    (19)

Second, the deformation of Eq. (19) is

t(x) = [1 − min_c Ic(x)/Ac] + t(x) min_c Jc(x)/Ac.    (20)

Because the minimal operation over the neighborhood Ω(x) is lacking, most values of the dark channel cannot approximately equal 0. In this paper, −t(x)min_c(Jc(x)/Ac) is called the scene depth noise ψ, and Eq. (20) takes the following form:

t′(x) = 1 − min_c Ic(x)/Ac = t(x) + ψ,    (21)

where t′(x) is called the transmission function. Transmission function t′(x) is different from the transmission function obtained by Eq. (10): t′(x) contains precise information about the transmission, but this information is disturbed by the additive scene depth noise ψ. If t′(x) can be denoised, we can obtain an accurate image transmission function.

Third, because ψ is negative noise, Eq. (21) requires additive reparation. We can obtain the rough transmission function, which is

where ψ′ is the weakened noise and ω is the reduction coefficient used to set the level of noise reduction.

Finally, except for large jumps at some boundaries, the scene depth is smooth in most regions, which makes it very suitable for edge-preserving image smoothing filters. In this paper, we use median filtering, non-local means (NLM) filtering [29], and bilateral filtering [18] to refine the proposed transmission function and investigate their effects on image compression.

For bilateral filtering [18], the transmission function after denoising is shown by Eq. (23):

t̂(i) = k(i)⁻¹ Σ_{x∈ωi} t̃(x) c(x,i) s(x,i),    (23)

where k is the normalizing constant, ωi is a window centered at pixel i, and c and s are the kernel functions of the bilateral filter: c measures the geometric closeness between the neighborhood center i and a nearby point x, and s measures the photometric similarity between the pixel at neighborhood center i and that at a nearby point x. k(i), c(x,i), and s(x,i) are given by Eq. (24) and Eq. (25):

k(i) = Σ_{x∈ωi} c(x,i) s(x,i),    (24)

c(x,i) = e^(−(1/2)(d(x,i)/σd)²),  s(x,i) = e^(−(1/2)(δ(x,i)/σr)²),    (25)

where d is the Euclidean distance between i and x , δ(x,i) is a suitable measure of distance between two intensity values f(i) and f(x) , σd is the geometric spread, and σr is the photometric spread.
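A direct (unoptimized) sketch of this filter follows; the window radius and the two spreads are illustrative, and a gradient image stands in for a transmission map:

```python
# A naive bilateral filter: each output pixel is a normalized average weighted
# by the geometric kernel c and the photometric kernel s of Eqs. (23)-(25).
import numpy as np

def bilateral_filter(f, radius=2, sigma_d=2.0, sigma_r=0.1):
    H, W = f.shape
    out = np.zeros_like(f)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, H)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, W)
            patch = f[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            c = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_d ** 2))
            s = np.exp(-((patch - f[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = c * s
            out[i, j] = (w * patch).sum() / w.sum()   # k(i) normalizes the weights
    return out

smoothed = bilateral_filter(np.tile(np.linspace(0, 1, 16), (16, 1)))
```

The photometric kernel s suppresses averaging across intensity jumps, which is exactly the edge-preserving behavior needed at the depth discontinuities of the transmission map.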

For NLM filtering [29], let v(i) be the observed noisy image, where i is the pixel index. The restored values can be derived as the weighted average of all gray values in the image (indexed in set I):

NL[v](i) = Σ_{j∈I} ω(i,j) v(j),    (26)

where the family of weights {ω(i,j)}j depends on the similarity between pixels i and j and satisfies the usual conditions 0≤ω(i,j)≤1 and Σ_j ω(i,j) = 1.

The similarity between two pixels i and j depends on the similarity of the intensity gray level vectors v(Ni) and v(Nj), where Nk denotes a square neighborhood of fixed size centered at pixel k. This similarity is measured as a decreasing function of the weighted Euclidean distance ‖v(Ni)−v(Nj)‖²(2,a), where a>0 is the standard deviation of the Gaussian kernel.

The pixels with a grey level neighborhood similar to v(Ni) have larger weights on average. These weights are defined as

ω(i,j) = (1/Z(i)) e^(−‖v(Ni)−v(Nj)‖²(2,a) / h²),    (27)

where Z(i) = Σ_j e^(−‖v(Ni)−v(Nj)‖²(2,a) / h²) is the normalizing constant and parameter h controls the decay of the exponential function.
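A small, naive sketch of this weighting scheme follows. The unweighted patch distance, the tiny sizes, and the global (rather than windowed) search are simplifying assumptions for illustration; real NLM restricts the search window and uses the Gaussian-weighted distance:

```python
# Naive non-local means: each pixel is a weighted average of all pixels,
# with weights from patch similarity as in Eqs. (26)-(27).
import numpy as np

def nlm_filter(v, patch_radius=1, h=0.15):
    H, W = v.shape
    pr = patch_radius
    padded = np.pad(v, pr, mode="reflect")
    # Gather the patch around every pixel as a feature vector.
    patches = np.array([padded[i:i + 2 * pr + 1, j:j + 2 * pr + 1].ravel()
                        for i in range(H) for j in range(W)])
    out = np.empty(H * W)
    for idx in range(H * W):
        d2 = ((patches - patches[idx]) ** 2).mean(axis=1)   # patch distance
        w = np.exp(-d2 / h ** 2)                            # Eq. (27) weights
        out[idx] = (w * v.ravel()).sum() / w.sum()          # Z(i) normalization
    return out.reshape(H, W)

denoised = nlm_filter(np.full((8, 8), 0.6))
```

Because whole patches rather than single pixels are compared, repeated structures anywhere in the image reinforce each other, at the cost of quadratic complexity in this naive form.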

Therefore, the transmission function after denoising is shown by Eq. (28):

t̂(i) = Σ_{j∈I} ω(i,j) t̃(j).    (28)

Fig. 4 and Fig. 5 show refined transmission function maps and dehazed images, respectively, of the proposed image dehazing method using median filtering (Fig. 4(a) and Fig. 5(a)), NLM filtering (Fig. 4(b) and Fig. 5(b)) and bilateral filtering (Fig. 4(c) and Fig. 5(c)).

Fig. 4.Image dehazing of direct denoising method: (a) median filtering, (b) NLM filtering, and (c) bilateral filtering; the left images are transmission function maps, and the right ones are dehazed images.

Fig. 5.Image dehazing of direct denoising method: (a) median filtering, (b) NLM filtering, and (c) bilateral filtering; the left images are transmission function maps, and the right ones are dehazed images.

Comparing Fig. 3, Fig. 4, and Fig. 5, the refined transmission function maps obtained by the guided image filter are clearer than those obtained by the three other dehazing contrast-enhancement filters; the maps refined by guided image filtering retain a large amount of detail. Among the four dehazing contrast-enhancement filters, the guided image filter has the best edge-preserving characteristic, followed by the bilateral filter, the median filter, and the NLM filter.

Observing the dehazed images obtained using the four different dehazing contrast-enhancement filters (Fig. 3, Fig. 4, and Fig. 5), both the DCP method and the direct denoising method can obtain clear dehazed images. In addition, the dehazed images include more details, and their colors are brighter. Therefore, all four dehazing contrast-enhancement filters can be used for image dehazing. However, there is still a certain white outline in Fig. 3(b) and in the right images of Fig. 4(b) and Fig. 5(b). Guided image filtering and bilateral filtering obtain more natural dehazed images, as seen in Fig. 3(b) and the right images of Fig. 4(c) and Fig. 5(c).

 

4. Effects of Image Dehazing Methods on Image Compression

In this section, we will analyze the effects of image dehazing methods on image compression.

4.1 Ringing Artifacts

When an image is decompressed, ringing artifacts occur where frequency components were lost on the compression side. This loss is caused in the quantization step (see Fig. 1 and Eq. (5)). Assuming that there is a foggy image I, Eq. (29) can be obtained according to Eq. (12):

Jc(x) = (Ic(x) − Ac)/T(x) + Ac,    (29)

where T(x) is the transmission function obtained by either the DCP method or the direct denoising method, and J is the dehazed image. The DCT transform is applied to dehazed image J,

where (Jc)i is the ith matrix block of the dehazed image.

Using Eq. (29) and Eq. (30), the DCT of the ith matrix block can be expressed as follows

Ignoring the direct-current (DC) component of the DCT transform, the alternating-current (AC) components are

The matrices of the DCT transform are quantized by quantization factor q(u,v)=kQuv, where k is the quantization coefficient, and we obtain

Fqi(u,v) = ⌊Fi(u,v)/q(u,v) + 1/2⌋,    (33)

where the operator ⌊x+1/2⌋ simply rounds x to the nearest integer. Therefore, when |Fi(u,v)|/q(u,v) < 1/2, the energy of the frequency at (u,v) will be eliminated. This is also the main reason for the energy loss caused by the JPEG standard. The probability that the energy of the frequency at (u,v) is eliminated is expressed as Eq. (34):

Pi(u,v) = P( |Fi(u,v)|/q(u,v) < 1/2 ).    (34)

With Eq. (32) and Eq. (34), we can compute the probability using dehazed AC coefficients; i.e.,

Because the transmission function changes slowly in the local region, the transmission function of the ith matrix block can be treated as a constant Ti(x,y)=Ti, and then Eq. (35) becomes

For the convenience of analysis, let Φu,v(X) and Ki(x,y) be defined as

Thus, Eq. (36) can be denoted as

The main difference between the image dehazing methods lies in the different transmission functions they obtain. For Ki(x,y), the methods are the same (assuming that atmospheric light Ac is accurate). Therefore, according to Eq. (39), under the same Ki(x,y), the smaller the value of Ti, the smaller the probability that energy is eliminated at frequency (u,v).
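This conclusion can be checked numerically. Because dehazing scales the AC content of a block by 1/Ti (Eq. (29)), a smaller Ti enlarges the AC coefficients and fewer of them fall below the q/2 elimination threshold. The quantizer step, block statistics, and Ti values below are illustrative:

```python
# Counting AC coefficients eliminated by quantization for two values of Ti.
import numpy as np

N = 8
u, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
T = np.sqrt(2.0 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
T[0, :] = np.sqrt(1.0 / N)          # orthonormal 8x8 DCT basis

def zeroed_ac_count(block, q=16.0):
    F = T @ block @ T.T
    Fq = np.round(F / q)
    return int((Fq == 0).sum() - (Fq[0, 0] == 0))   # zeroed AC terms only

rng = np.random.default_rng(3)
hazy = rng.random((N, N)) * 20            # low-contrast (hazy) block
A = 200.0
dehazed_strong = (hazy - A) / 0.4 + A     # small Ti: strong amplification
dehazed_weak = (hazy - A) / 0.9 + A       # Ti near 1: little amplification
```

Since the DCP method yields the smaller transmission, its compressed dehazed blocks retain more AC energy, but by the same token any surviving quantization error is also amplified, which is the source of the stronger ringing discussed above.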

The rough transmission functions obtained by using the method of the DCP method and direct denoising method are shown by Eq. (40) and Eq. (41), respectively:

Comparing Eq. (40) with Eq. (41), it can be easily derived that

Because the refinement of the rough transmission function is based on the characteristic that the transmission function changes slowly in local regions, we know that the refined transmission functions still satisfy the following relationship:

Combining Eq. (39) with Eq. (43), we can easily obtain the following relationship:

From Eq. (44), we can obtain the following conclusion: compressed dehazed images obtained by the DCP method are more likely to have loss of image energy and ringing artifacts.

4.2 Blocking Artifacts

The cause of blocking artifacts in lossy compression is the artificial boundaries between neighboring blocks induced by the BDCT. We will compare the ith reconstructed block, which is simply the dequantization and inverse BDCT of the quantized block; i.e.,

For the convenience of analysis, the dehazing function D(·) is defined as

Based on the different image dehazing methods, the transmission T(x) takes the corresponding forms, respectively. The reconstructed block can be characterized as the original signal plus reconstruction noise εr, so the reconstructed dehazed image is

From Eq. (43), we know the relationship between the two refined transmission functions. Thus, we can obtain the relationship between the DCP method and the direct denoising method (supposing εr is the same for the different methods); i.e.,

With E[εr]=0, the expected value of Dfinal(Ic) equals Jc , and the variance is

where var[·] denotes the variance.

For the signal-to-noise ratio (SNR) on the reconstructed (or decompressed) end of the system, we can obtain the inequality shown as follows

From Eq. (50), we can see that the direct denoising method achieves a higher SNR than the DCP method on the reconstructed end.

Fig. 6 and Fig. 7 show reconstructed images using the different image dehazing methods. Comparing Fig. 4, Fig. 5, Fig. 6, and Fig. 7, the images in Fig. 6 and Fig. 7 have severe noise after compression with quantization coefficient k=3. The reconstructed images using the direct denoising method have better subjective visual quality than the reconstructed images using the DCP method, which is consistent with the conclusions in Eq. (44) and Eq. (50).

Fig. 6.Compressed dehazed images using different image dehazing methods: (a) median filtering, (b) NLM filtering, (c) bilateral filtering, and (d) guided image filtering.

Fig. 7.Compressed dehazed images using different image dehazing methods: (a) median filtering, (b) NLM filtering, (c) bilateral filtering, and (d) guided image filtering.

 

5. Improved Direct Denoising Method

Based on the analysis in Section 3 and Section 4, we can conclude that the direct denoising method performs better than the DCP method under image compression. Building on the method of calculating the transmission function in Section 3.2, we continue to refine the transmission function and propose a new image dehazing method based on bilateral filtering. For simplicity, the transmission function mentioned below is denoted t(x).

Transmission function t(x) describes the transmission of reflected light, which is determined by scene depth d(x). From Eq. (2), we can see that t(x) decays exponentially with the scene depth: if d(x) is sufficiently large, t(x) tends to be very small. In other words, the values of t(x) are smaller in the far scene and larger in the close scene. However, according to Eq. (22), the calculated values of t(x) become small if some regions of the hazy image have high RGB values. Under these circumstances, objects with a brighter color may be mistaken for the far scene, which causes some of the brighter parts of the close scene to become very bleak. Thus, it is necessary to distinguish between the far scene and high-brightness objects in the close scene.

In an image, a high-brightness object stands out with respect to its neighborhood. Therefore, we can roughly distinguish the far scene from high-brightness objects in the close scene by using a saliency map [30]. By analyzing the log-spectrum of an input image, Hou and Zhang [31] extracted the spectral residual of the image in the spectral domain and proposed a fast method to construct the corresponding saliency map in the spatial domain. This model is independent of features, categories, or other forms of prior knowledge about objects. Thus, in this paper, we adopt their method to realize this distinction.
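The spectral residual construction can be sketched as follows. The averaging filter size, the Gaussian smoothing, and the use of log1p (to avoid log of zero) are our illustrative choices rather than the exact settings of [31]:

```python
# Spectral residual saliency: the residual of the log amplitude spectrum,
# mapped back to the spatial domain with the original phase.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img):
    spectrum = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(spectrum))                   # log amplitude
    phase = np.angle(spectrum)
    residual = log_amp - uniform_filter(log_amp, size=3)   # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2.5)            # smoothed map

rng = np.random.default_rng(4)
img = rng.random((64, 64))
img[20:30, 20:30] += 2.0          # a bright object that should pop out
S = spectral_residual_saliency(img)
```

The map S is large where the image deviates from its "expected" spectrum, which is why a bright close-scene object yields high saliency while large smooth far-scene regions yield values near zero.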

To distinguish between the far scene and high-brightness object in the close scene, the brightness value of the pixel is also considered when the transmission function is refined. The refined transmission function is shown by Eq. (51):

where m is the average gray level of t(x), k is an adjustable coefficient, and S(x) is the saliency map. t(x)/(km) is used to measure the relative brightness of a pixel. For the close scene with bright colors, log(1+S(x)) increases the value of the corresponding transmission t(x). For the far scene, the pixel values of S(x) approximately equal 0, and tsaliency(x) approximately equals t(x). Fig. 8 and Fig. 9 show examples of transmission function maps and restored images for the different transmission functions: Fig. 8(a) and Fig. 9(a) use t(x), whereas Fig. 8(b) and Fig. 9(b) use tsaliency(x).

Fig. 8. Examples of different transmission functions: (a) t(x), and (b) tsaliency(x).

Fig. 9. Examples of different transmission functions: (a) t(x), and (b) tsaliency(x).

From Fig. 8 and Fig. 9, we can distinguish bright objects from the far scene. Our new image dehazing method has achieved better dehazing effects: it not only removes mist but also well preserves true color and brightness in the close scene. The dehazed images also have higher contrast.

 

6. Experimental Results

In Section 4, this paper qualitatively analyzed the effects of two different image dehazing methods (the DCP method and the direct denoising method) on image compression. Four different dehazing contrast-enhancement filters are used in the two image dehazing methods: the guided image filter [19], median filter, NLM filter [29], and bilateral filter [18]. In Section 3, subjective experimental results of image dehazing using these dehazing contrast-enhancement filters were given. The experimental effects of these dehazing contrast-enhancement filters are influenced by their parameters. Thus, in this section, this paper quantitatively analyzes the effects of the aforementioned filters and their parameters on image compression.

When images are compressed with JPEG, the luminance and chrominance quantization coefficients are both 3 in this experiment. We choose the House image as the test image. The evaluation criteria are the CPSNR and the BIQA proposed in [32] for quantitative analysis of the experimental results. We calculate the CPSNR of the dehazed image and the reconstructed image. Different image dehazing methods produce different dehazed images, and the image compression bit rates also differ. Thus, we modify BIQA and use the ratio of the BIQA score to the compression bit rate as the new BIQA. Fig. 10 shows the block diagram of this experimental process.

Fig. 10. Block diagram of the experimental process.
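The two scores used in this pipeline can be computed as follows. cpsnr is the standard composite PSNR, where the MSE is averaged jointly over the three color channels before taking the logarithm; biqa_per_bit is the modified metric described above (BIQA score divided by bit rate). The function names are ours.

```python
import numpy as np

def cpsnr(ref, test):
    """Composite PSNR between two 8-bit color images.

    The squared error is averaged jointly over H x W x 3 before the log,
    which is the usual definition of CPSNR."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(255.0 ** 2 / mse)

def biqa_per_bit(biqa_score, bit_rate):
    """Modified metric: BIQA score [32] per compression bit rate, so that
    dehazing methods producing different bit rates stay comparable."""
    return biqa_score / bit_rate
```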

For the median filter, the window size is the significant parameter. In this paper, we choose 3×3, 5×5, 7×7, 9×9, and 11×11 for testing. Table 1 shows the experimental results. From Table 1, the median filter with a 3×3 window size has the best image processing effect; as the window size becomes large, blocking artifacts become serious.

Table 1. Effects of the window size of the median filter on image compression
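The window-size sweep of Table 1 can be reproduced with a minimal numpy median filter; the JPEG round-trip and CPSNR/BIQA scoring steps of Fig. 10 are omitted here, so this only illustrates how larger windows smooth more aggressively, which is what drives the blocking artifacts noted above.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter(img, w):
    """Minimal w x w median filter with edge-replicating padding."""
    p = w // 2
    padded = np.pad(img, p, mode='edge')
    windows = sliding_window_view(padded, (w, w))  # shape (H, W, w, w)
    return np.median(windows, axis=(-2, -1))

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(32, 32)).astype(float)
for w in (3, 5, 7, 9, 11):           # the window sizes tested in Table 1
    print(w, median_filter(noisy, w).std())
```

On random noise, the standard deviation of the output drops sharply as the window grows, confirming the over-smoothing trend.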

For the NLM filter, the parameter h controls the smoothness of the denoising result. For an image of size m×m to be denoised, the sizes of the patch and the search window are n×n and w×w, respectively; the value of n is usually 5, 7, 9, or 11, and in general w is much smaller than m. Table 2 shows the experimental results.

Table 2. Effects of different parameters of the NLM filter on image compression
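A reference transcription of the NLM filter [29] with exactly these parameters (h, patch size n, search window w). This is a deliberately naive numpy sketch for clarity, not the implementation used in the experiments, and it uses the basic exp(-d²/h²) weighting.

```python
import numpy as np

def nlm_denoise(img, h=10.0, n=5, w=11):
    """Naive non-local means: n x n patches are compared inside a w x w
    search window; h controls the smoothness of the result.
    O(H*W*w^2*n^2) -- a reference sketch, not a fast implementation."""
    H, W = img.shape
    pn, pw = n // 2, w // 2
    pad = pn + pw
    padded = np.pad(img.astype(np.float64), pad, mode='reflect')
    out = np.zeros((H, W), dtype=np.float64)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pn:ci + pn + 1, cj - pn:cj + pn + 1]
            weights, values = [], []
            for di in range(-pw, pw + 1):
                for dj in range(-pw, pw + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pn:ni + pn + 1, nj - pn:nj + pn + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

A larger h flattens the weights and smooths more; a constant image passes through unchanged, since every patch distance is zero.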

The bilateral filter has two parameters: σd and σr. We fix one and vary the other to study the effects of each parameter on image compression; the fixed values of σd and σr are 15 and 0.3, respectively. The guided image filter also has two parameters: ε and r. We likewise fix one and vary the other; the fixed values of ε and r are 0.001 and 20, respectively. Fig. 11 and Fig. 12 give the experimental results for these two filters.

Fig. 11. Effects of bilateral filter parameters on image compression: (a) effects of σd on CPSNR and BIQA, and (b) effects of σr on CPSNR and BIQA.

Fig. 12. Effects of guided filter parameters on image compression: (a) effects of r on CPSNR and BIQA, and (b) effects of ε on CPSNR and BIQA.

Fig. 11(a) shows the effect of the bilateral filter on the reconstructed dehazed image with varying σd and fixed σr = 0.3. As σd increases, the CPSNR and BIQA scores of the decompressed dehazed images increase slightly; the image quality is almost unaffected by σd. Fig. 11(b) shows the effect with varying σr and fixed σd = 15: as σr increases, both the CPSNR and the BIQA scores of the decompressed dehazed images decrease. From Fig. 12(a), it can be seen that the larger the value of r, the better the image quality; from Fig. 12(b), the larger the value of ε, the worse the image quality.
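The ε trend in Fig. 12(b) matches the intrinsic behavior of the guided image filter [19], which this self-guided numpy sketch reproduces: with a small ε the filter is nearly the identity, while a large ε turns it into an aggressive smoother, degrading detail.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box(img, r):
    """Mean over a (2r+1) x (2r+1) window with edge-replicating padding."""
    w = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    return sliding_window_view(padded, (w, w)).mean(axis=(-2, -1))

def guided_filter(I, p, r, eps):
    """Guided image filter of He et al. [19]: I is the guidance image,
    p the filtering input, r the window radius, eps the regularization."""
    mean_I, mean_p = box(I, r), box(p, r)
    var_I = box(I * I, r) - mean_I ** 2
    cov_Ip = box(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # per-window linear coefficients
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)  # average coefficients, then apply
```

In the self-guided case (I = p), a = var/(var + ε): as ε → 0, a ≈ 1 and b ≈ 0 (near-identity), while a large ε drives a toward 0 and the output toward the local mean.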

Comparing Table 1, Table 2, Fig. 11, and Fig. 12, we can see that among the four dehazing contrast-enhancement filters, the NLM filter yields decompressed images of the best quality. These experimental results are consistent with the conclusions of our analysis in Section 4.


7. Conclusion

The image transmission of outdoor monitoring systems plays an important role in daily life. With the increasing frequency of hazy weather, how to obtain a clear image at the transmission terminal under such bad weather conditions is an urgent problem to be solved. In this paper, we propose a new method for calculating the image transmission function; the single-image dehazing method using it is called the direct denoising method. We discuss and analyze the different effects of the DCP method and the direct denoising method on image dehazing and image compression, using four different dehazing contrast-enhancement filters in simulation experiments. The simulation results are consistent with the analysis: reconstructed images dehazed by the direct denoising method have better image quality. To obtain more desirable dehazed images with higher contrast, we improve the direct denoising method by using the saliency map as a guidance image to distinguish between the far scene and bright areas in the close scene; the transmission function is thereby more elaborate, and the dehazed images have higher contrast. Finally, extensive experiments explore the influence of different filter parameters on image compression. We can draw a conclusion: not only the dehazing methods but also the dehazing contrast-enhancement filters affect image compression. Our proposed and improved image dehazing method can strike a balance between image dehazing and image compression.

In this paper, the analysis is based on single images. In reality, however, outdoor monitoring systems transmit video in addition to images, so it is necessary to study the influence of dark-channel-based dehazing methods on video compression. In the future, we will focus on the influence of dehazing methods on MPEG-x/H.26x compressed video.

References

  1. G. Woodell, D. J. Jobson, Z.-U. Rahman, and G. Hines, "Advanced image processing of aerial imagery," in Proc. of the International Society for Optical Engineering - Visual Information Processing XV, Kissimmee, USA, Apr. 18-19, 2006, vol. 6246, 12 pages, Article number: 62460E.
  2. J. W. Han, D. W. Zhang, G. Cheng, L. Guo, and J. C. Ren, “Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3325-3337, Jun. 2015. https://doi.org/10.1109/TGRS.2014.2374218
  3. J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Transactions on Image Processing, vol. 9, no. 5, pp. 889-896, May 2000. https://doi.org/10.1109/83.841534
  4. J.-Y. Kim, L.-S. Kim, and S.-H. Hwang, “An advanced contrast enhancement using partially overlapped sub-block histogram equalization,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 4, pp. 475-484, Apr. 2001. https://doi.org/10.1109/76.915354
  5. D. Miyazaki, D. Akiyama, M. Baba, R. Furukawa, S. Hiura, and N. Asada, "Polarization-based dehazing using two reference objects," in Proc. of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, Dec. 2-8, 2013, pp. 852-859.
  6. S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, USA, Jun. 17-22, 2006, pp. 1984-1991.
  7. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713-724, Jun. 2003. https://doi.org/10.1109/TPAMI.2003.1201821
  8. S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proc. of the IEEE International Conference on Computer Vision, Kerkyra, Greece, Sep. 20-27, 1999, vol. 2, pp. 820-827.
  9. S. G. Narasimhan and S. K. Nayar, "Interactive (de)weathering of an image using physical models," in Proc. of the IEEE Workshop on Color and Photometric Methods in Computer Vision, Nice, France, Oct. 13-16, 2003, 8 pages.
  10. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” ACM Transactions on Graphics, vol. 27, no. 5, 10 pages, Article number: 116, Dec. 2008. https://doi.org/10.1145/1409060.1409069
  11. R. T. Tan, "Visibility in bad weather from a single image," in Proc. of the 26th IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, USA, Jun. 23-28, 2008, 8 pages, Article number: 4587643.
  12. R. Fattal, "Single image dehazing," ACM Transactions on Graphics, vol. 27, no. 3, 9 pages, Article number: 72, Aug. 2008. https://doi.org/10.1145/1360612.1360671
  13. K. M. He, J. Sun, and X. O. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec. 2011. https://doi.org/10.1109/TPAMI.2010.168
  14. A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228-242, Feb. 2008. https://doi.org/10.1109/TPAMI.2007.1177
  15. J. Yu, C. B. Xiao, and D. P. Li, "Physics-based fast single image fog removal," in Proc. of the International Conference on Signal Processing, Beijing, China, Oct. 24-28, 2010, pp. 1048-1052.
  16. K. B. Gibson, D. T. Võ, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 662-673, Feb. 2012. https://doi.org/10.1109/TIP.2011.2166968
  17. C. X. Xiao and J. J. Gan, “Fast image dehazing using guided joint bilateral filter,” Visual Computer, vol. 28, no. 6-8, pp. 713-721, Jun. 2012. https://doi.org/10.1007/s00371-012-0679-y
  18. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. of the IEEE International Conference on Computer Vision, Bombay, India, Jan. 4-7, 1998, pp. 839-846.
  19. K. M. He, J. Sun, and X. O. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, Jun. 2013. https://doi.org/10.1109/TPAMI.2012.213
  20. C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, "A fast semi-inverse approach to detect and remove the haze from a single image," in Proc. of the 10th Asian Conference on Computer Vision, Queenstown, New Zealand, Nov. 8-12, 2010, Lecture Notes in Computer Science, vol. 6493, Part 2, pp. 501-514.
  21. L. Kratz and K. Nishino, "Factorizing scene albedo and depth from a single foggy image," in Proc. of the 12th IEEE International Conference on Computer Vision, Kyoto, Japan, Sep. 29-Oct. 2, 2009, pp. 1701-1708.
  22. K. Nishino, L. Kratz, and S. Lombardi, “Bayesian defogging,” International Journal of Computer Vision, vol. 98, no. 3, pp. 263-278, Jul. 2012. https://doi.org/10.1007/s11263-011-0508-1
  23. G. F. Meng, Y. Wang, J. Y. Duan, S. M. Xiang, and C. H. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in Proc. of the 14th IEEE International Conference on Computer Vision, Sydney, Australia, Dec. 1-8, 2013, pp. 617-624.
  24. K. Tang, J. C. Yang, and J. Wang, "Investigating haze-relevant features in a learning framework for image dehazing," in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Columbus, USA, Jun. 23-28, 2014, pp. 2995-3002.
  25. Z. G. Li, J. H. Zheng, Z. J. Zhu, W. Yao, and S. Q. Wu, “Weighted guided image filtering,” IEEE Transactions on Image Processing, vol. 24, no. 1, pp. 120-129, Jan. 2015. https://doi.org/10.1109/TIP.2014.2371234
  26. Z. G. Li and J. H. Zheng, “Edge-preserving decomposition-based single image haze removal,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5432-5441, Dec. 2015. https://doi.org/10.1109/TIP.2015.2482903
  27. E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules and Particles, John Wiley and Sons, New York, 1976.
  28. G. K. Wallace, “The JPEG still picture compression standard,” Communications of the ACM, vol. 34, no. 4, pp. 30-44, Apr. 1991. https://doi.org/10.1145/103085.103089
  29. A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, USA, Jun. 20-25, 2005, vol. 2, pp. 60-65.
  30. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, Nov. 1998. https://doi.org/10.1109/34.730558
  31. X. D. Hou and L. Q. Zhang, "Saliency detection: A spectral residual approach," in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Minneapolis, USA, Jun. 17-22, 2007, 8 pages, Article number: 4270292.
  32. Z. Wang, H. R. Sheikh, and A. C. Bovik, "No reference perceptual quality assessment of JPEG compressed images," in Proc. of the IEEE International Conference on Image Processing, Rochester, USA, Sep. 22-25, 2002, vol. 1, pp. 477-480.
