Single Low-Light Ghost-Free Image Enhancement via Deep Retinex Model

  • Liu, Yan (School of Computer and Communication Engineering, Zhengzhou University of Light Industry) ;
  • Lv, Bingxue (School of Computer and Communication Engineering, Zhengzhou University of Light Industry) ;
  • Wang, Jingwen (School of Computer and Communication Engineering, Zhengzhou University of Light Industry) ;
  • Huang, Wei (School of Computer and Communication Engineering, Zhengzhou University of Light Industry) ;
  • Qiu, Tiantian (School of Computer and Communication Engineering, Zhengzhou University of Light Industry) ;
  • Chen, Yunzhong (School of Electronic Information and Electrical Engineering, Shanghai Jiaotong University)
  • Received : 2021.03.02
  • Accepted : 2021.05.12
  • Published : 2021.05.31

Abstract

Low-light image enhancement is a key technique for overcoming the quality degradation of photos taken under scotopic illumination conditions. The degradation includes low brightness, low contrast, and prominent noise, which seriously affect human visual recognition and subsequent image processing. In this paper, we propose an approach based on deep learning and Retinex theory to enhance low-light images, which includes image decomposition, illumination prediction, image reconstruction, and image optimization. The first three parts reconstruct an enhanced image that still suffers from low resolution. To reduce the noise of the enhanced image and improve the image quality, a super-resolution algorithm based on the Laplacian pyramid network is introduced to optimize the image. The Laplacian pyramid network improves the resolution of the enhanced image through multiple feature extraction and deconvolution operations. Furthermore, a combination loss function is explored in the network training stage to improve the efficiency of the algorithm. Extensive experiments and comprehensive evaluations demonstrate the strength of the proposed method; the results are closer to the real-world scene in lightness, color, and details. Besides, experiments also demonstrate that the proposed method, using a single low-light image, can achieve the same effect as multi-exposure image fusion algorithms without introducing ghosting.

Keywords

1. Introduction

Low-light images are captured under dim lighting conditions due to inevitable environmental or technical constraints, which degrade the performance of algorithms and the visual quality of images[1]. The essential goal of digital photography is to reproduce the natural scene with minimal artifacts from the RAW data. High-quality images can be used for many high-level visual tasks, such as image segmentation[2], object tracking[3,4], object detection[5-7], and image classification[8-10]. However, weak-illumination or low-light images have low visibility, intensive noise, low dynamic range, low signal-to-noise ratio (SNR), and color distortion, which hinders the above visual tasks. Hence, an increasing number of researchers have expressed keen interest in low-light image enhancement[11].

 Traditional single-image contrast enhancement techniques include histogram equalization (HE)[12,13], Retinex[12,13], and high dynamic range (HDR) methods[14,15]. HE methods enhance low-light images by adjusting the contrast of the image, but they inevitably introduce undesirable illumination. Besides, HE methods do not consider the image brightness degradation mechanism, which amplifies the noise in the results and makes them unsuitable for complex low-light scenes. Retinex is divided into single-scale Retinex (SSR)[16] and multi-scale Retinex (MSR)[17]; the former enhances the image based on the Gabor filter, while the latter may produce color distortion and the results often look unnatural. Existing low-light HDR enhancement methods attempt to expand the range of luminance and enhance the lightness of the images. However, artifacts appear in saturated areas, which is not acceptable in high-quality computer vision.

 Recently, deep learning-based approaches have gained remarkable traction, which also motivates the development of new deep learning-based approaches[18-21] for low-level image processing tasks, including super-resolution[22,23], rain removal[24,25], hyperspectral imaging[26,27], and so on. Shen et al.[28] proposed MSR-net to enhance low-light images by learning a mapping from low-light images to normal-light images. Chen et al.[29] introduced a low-light raw-image dataset (SID) and developed an algorithm for processing these low-light raw images based on a deep learning network.

 Inspired by Retinex theory and deep learning, in this paper a novel deep learning approach is proposed, which learns an illumination-to-illumination mapping based on the Retinex theory to enhance low-light images. To improve the quality and dynamic range of the image, a super-resolution algorithm based on the Laplacian pyramid network[30] is introduced to optimize the enhanced image. Besides, a combination loss function is introduced during training. We analyze the performance of the proposed algorithm on different datasets and against different algorithms, such as histogram equalization (HE), the Gray World Algorithm (GWA), Automatic White Balance (AWB), Gamma adjustment (GA), Dong et al.[31], and the low-light enhancement methods LIME[32], RetinexNet[33], and KinD[34]. Several distinct metrics are utilized to evaluate and compare the enhanced images, such as Peak Signal-to-Noise Ratio (PSNR), Entropy, Structural SIMilarity (SSIM), the Natural Image Quality Evaluator (NIQE)[35], the Perception-based Image Quality Evaluator (PIQE)[36], and runtime. The experimental results show that the proposed algorithm achieves superior performance compared with existing methods.

 We highlight our main contributions as follows:

● A new extreme low-light image dataset with 1331 images is constructed for network training, which consists of three parts: 347 images of the LOL dataset[33], 735 images of SICE[37], and 249 images captured by our camera; each low-light image has a corresponding normal-light image.

● A novel low-light image enhancement algorithm is designed, which first learns an illumination-component-to-illumination-component mapping to produce a predicted illumination component; the predicted illumination component and the reflectance component of the normal-light image are then fused to reconstruct the enhanced image.

● A super-resolution algorithm is developed based on the Laplacian pyramid network, which reduces the noise of the images and enhances the image contrast effectively.

● We explore a combination loss function during network training, which consists of a reconstruction loss, an image perception loss, and a color loss. Qualitative and quantitative experimental results demonstrate that our method outperforms existing methods in improving the lightness and color of images, restoring detailed information, and reducing runtime.

2. Related works

 Image enhancement has a long history in computer vision and has been extensively studied. A short review of existing methods is provided as follows:

 Conventional Methods. Low-light image enhancement has been addressed with many kinds of techniques. One of the most widely used techniques is HE, which is the earliest low-light enhancement method. Its limitation is obvious: it globally adjusts the contrast of the entire image and lacks processing of image details. Another is gamma correction, which enhances the brightness of low-light areas by compressing bright pixels[29]. Subsequently, Li et al.[38] proposed a novel unified low-light image enhancement framework by applying dehazing methods for both image contrast enhancement and denoising. Celik et al.[39] proposed an algorithm that uses the mutual relationship between each pixel and its neighbors to enhance the input image. These methods suppress noise and increase the contrast of the image, but inevitably blur details.
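
 For reference, a minimal sketch of the two simplest techniques mentioned above is given below; it relies on NumPy and scikit-image, and the gamma value is purely illustrative.

```python
import numpy as np
from skimage import exposure


def gamma_correction(image, gamma=2.2):
    """Gamma correction: raising normalized intensities to 1/gamma brightens
    dark regions while compressing bright pixels (gamma value illustrative)."""
    return np.clip(image, 0.0, 1.0) ** (1.0 / gamma)


def histogram_equalization(image):
    """Global histogram equalization via scikit-image; it adjusts contrast
    over the whole image without special treatment of local details."""
    return exposure.equalize_hist(image)
```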

 Later on, Retinex-based methods achieved illumination adjustment and noise suppression by decomposing the image into an illumination component and a reflectance component and then adjusting them adaptively[40]. Jobson et al.[16] described the trade-off between rendition and image dynamic range compression, and found the gain/offset that gives the best rendition. Later, Jobson et al.[41] presented a single-scale center Retinex, which achieves lightness rendition and color correction. The above methods have been widely employed, but ground truth is not available. Ying et al.[42] designed an approach using the response characteristics of cameras; the textural details are successfully restored, but the enhanced image still contains noise.

 Low-light enhancement via CNNs. Compared with traditional methods, CNNs have better feature extraction ability and have yielded promising results owing to their powerful computing capacity[43]. Yang et al.[44] proposed a method to enhance low-light images using coupled dictionary learning. Lore et al.[45] utilized an auto-encoder network to learn contrast enhancement and denoising at the same time. However, recent developments in deep learning are not exploited in the above methods. In recent years, Wei et al.[33] proposed Retinex-Net, in which a Decom-Net is utilized for image decomposition and an Enhance-Net is utilized for illumination adjustment. Cai et al.[37] proposed a single image contrast enhancement (SICE) method learned from multi-exposure image sequences. Ren et al.[46] proposed an encoder-decoder hybrid network to enhance the input low-light image. Although these methods achieve considerable success, performing well on extremely dark images is still challenging, and the generated images are often blurry and noisy.

3. Our Approach

 In this section, the proposed method of low-light image enhancement based on deep learning and the Retinex model is described in detail. We first illustrate our motivation, and then introduce the network architecture and loss function in detail. Besides, a super-resolution method based on the Laplacian pyramid network is described, which optimizes the enhanced result.

3.1 Motivation

 The Retinex theory was developed by Land and McCann[47] and comes from the human visual system's perception of image color and brightness. The reflection of light by an object is an inherent property and does not change with the illumination. Motivated by the Retinex theory, we design a deep image enhancement approach that operates on the illumination component and reflectance component of the inputs to light up the dark regions. Retinex theory regards an image as the product of a reflectance map and an illumination map, as shown in (1):

\(I(x, y)=R(x, y) \circ L(x, y)\)       (1)

 where (x, y) represents the position of a pixel, I(x, y) is the real-world image, and R(x, y) and L(x, y) denote the reflectance map and the illumination map of the image, respectively.
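
 For clarity, a minimal NumPy sketch of the composition in (1) is given below; the array shapes are assumptions for illustration.

```python
import numpy as np


def retinex_compose(reflectance, illumination):
    """Compose an image from its Retinex components, as in Eq. (1).

    reflectance:  H x W x 3 array in [0, 1]
    illumination: H x W x 1 array in [0, 1], broadcast over the channels
    """
    return reflectance * illumination  # element-wise (Hadamard) product
```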

3.2 Network Architecture

 Our method can be divided into four steps: decomposition, illumination estimation, reconstruction, and image optimization. Paired images with different light conditions are utilized as the inputs of the network during the training stage. The network framework is shown in Fig. 1.

Fig. 1. The CNN architecture of the proposed method.

 In the decomposition step, the input low-light image is decomposed into a reflectance map and an illumination map by Decom-Net. Decom-Net learns the decomposition from the input low-light and normal-light image pairs; it has five 3×3 convolutional layers with ReLU, which extract features from the input images, and a Sigmoid layer, which constrains the reflectance map and the illumination map to the range [0, 1].
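
 As a concrete illustration, the following TensorFlow sketch builds a Decom-Net-like model under stated assumptions: 64 feature channels and two separate Sigmoid output heads for the reflectance and illumination maps, details the paper does not specify.

```python
import tensorflow as tf
from tensorflow.keras import layers


def build_decom_net(channels=64):
    """Decom-Net sketch: five 3x3 conv+ReLU layers followed by Sigmoid
    outputs that constrain both components to [0, 1]. The channel width and
    the two-head output are assumptions."""
    x_in = layers.Input(shape=(None, None, 3))
    x = x_in
    for _ in range(5):
        x = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    reflectance = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)   # R(x, y)
    illumination = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # L(x, y)
    return tf.keras.Model(x_in, [reflectance, illumination])
```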

 In the illumination estimation step, the illumination component of the low-light image is predicted by learning the mapping from the illumination components of low-light images to the corresponding illumination components of normal-exposure images. The illumination prediction network includes nine 3×3 convolutional layers with ReLU and a fully connected layer. Different from the KinD network, which adjusts the illumination component by the ratio of a source light Ls to a target one Lt, we adopt an illumination-to-illumination mapping to generate the predicted illumination component \(\tilde{I}_{p r e}\), which can be expressed as (2):

\(\tilde{I}_{p r e}=F\left(\tilde{I}_{\text {low }}, \tilde{I}_{\text {normal }}\right)\)       (2)

 where F is the mapping function, and \(\tilde{I}_{low}\) and \(\tilde{I}_{normal}\) represent the illumination component of the input low-light image and the illumination component of the corresponding normal-light image, respectively.
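
 A minimal sketch of the illumination prediction network follows. It stacks nine 3×3 conv+ReLU layers; a 1×1 convolution stands in for the paper's fully connected layer so that arbitrary image sizes can be handled, which is an assumption on our part.

```python
import tensorflow as tf
from tensorflow.keras import layers


def build_illum_net(channels=64):
    """Illumination-prediction sketch for Eq. (2): maps the low-light
    illumination component to a predicted one; during training the
    normal-light illumination serves as the target. The channel width and
    the 1x1-conv output layer are assumptions."""
    l_low = layers.Input(shape=(None, None, 1))
    x = l_low
    for _ in range(9):
        x = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    l_pre = layers.Conv2D(1, 1, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(l_low, l_pre)
```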

 In the image reconstruction step, the predicted illumination component and the reflectance component of the normal-light image are fused by element-wise multiplication to produce the enhanced image.

 The enhanced image usually suffers from noise and low resolution. To enhance image details and achieve a better visual effect, a Laplacian pyramid super-resolution network[48] is introduced in the image optimization step, which extracts low-resolution features from the enhanced image with a convolutional layer and then up-samples the extracted features by a factor of four with two transposed convolutional layers to generate a residual image. The 4×SR image is generated by fusing the reconstructed image and the residual image. Similarly, eight-times up-sampling can be achieved with three transposed convolutional layers to generate an 8×SR image. The Laplacian pyramid has five levels and shares weights across and within pyramid levels, which decreases the network computation and promotes the efficiency of the network.
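
 To make the optimization step concrete, a simplified one-level sketch of a Laplacian-pyramid-style super-resolution branch is given below; the actual network uses more levels with shared weights, and the layer counts and widths here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers


def build_lapsrn_level(channels=64):
    """One pyramid level: feature extraction, 2x transposed-convolution
    up-sampling, a residual image, and fusion with an up-sampled copy of the
    input (a sketch, not the paper's full five-level network)."""
    x_in = layers.Input(shape=(None, None, 3))
    feat = layers.Conv2D(channels, 3, padding="same", activation="relu")(x_in)
    feat = layers.Conv2DTranspose(channels, 4, strides=2, padding="same",
                                  activation="relu")(feat)
    residual = layers.Conv2D(3, 3, padding="same")(feat)
    upsampled = layers.Conv2DTranspose(3, 4, strides=2, padding="same")(x_in)
    out = layers.Add()([upsampled, residual])
    return tf.keras.Model(x_in, out)
```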

3.3 Loss Function

 During network training, we design a combination loss function \(\mathcal{L}\), which includes three components: a reconstruction loss \(\mathcal{L}_{\text {recon }}\), an image perception loss \(\mathcal{L}^{p}\), and a color loss \(\mathcal{L}^{c}\), as shown in (3):

\(\mathcal{L}=\mathcal{L}_{\text {recon }}+\lambda_{p} \mathcal{L}^{p}+\lambda_{c} \mathcal{L}^{c}\)       (3)

 where λp and λc denote the weights of \(\mathcal{L}^{p}\) and \(\mathcal{L}^{c}\), respectively. To obtain the reconstructed image, the reconstruction loss function \(\mathcal{L}_{\text {recon }}\) can be expressed as (4):

\(\mathcal{L}_{\text {recon }}=\sum_{i=\text {low}, \text {normal}} \sum_{j=\text {low}, \text {normal}} \lambda_{i, j}\left\|R_{i} \circ \tilde{I}_{\text {pre}}-S_{j}\right\|_{1}\)       (4)

 where Ri is the reflectance component, Sj denotes the source image (the low-light or normal-light input), and λi,j is the reconstruction coefficient.
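
 A minimal sketch of this loss is given below; the reconstruction coefficients are assumptions, and the L1 norm is averaged over pixels.

```python
import tensorflow as tf


def reconstruction_loss(refl, src, i_pre, lam):
    """Reconstruction-loss sketch for Eq. (4). `refl` and `src` are dicts
    keyed by "low"/"normal" holding reflectance components and source images;
    `lam` maps (i, j) pairs to coefficients (values are assumptions)."""
    loss = 0.0
    for i, r_i in refl.items():
        for j, s_j in src.items():
            # L1 distance between R_i composed with the predicted
            # illumination and the source image S_j
            loss += lam[(i, j)] * tf.reduce_mean(tf.abs(r_i * i_pre - s_j))
    return loss
```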

 The illumination in natural images is usually locally smooth, so the reconstruction loss \(\mathcal{L}_{\text {recon }}\) alone might cause the generated images to lack high-frequency information and appear over-smooth. Hence, an image perception loss \(\mathcal{L}^{p}\) is introduced into the network, which compares the illumination component of the input image with that of the corresponding normal-light image. Optimizing the perception loss \(\mathcal{L}^{p}\) makes the predicted illumination component \(\tilde{I}_{p r e}\) closer to the normal-light illumination component \(\tilde{I}_{normal}\), which effectively reconstructs high-frequency image information and makes the generated image contain more detailed information. The perception loss \(\mathcal{L}^{p}\) can be expressed as (5):

\(\mathcal{L}^{p}=\frac{1}{W H} \sum_{x=1}^{W} \sum_{y=1}^{H}\left(\left(\tilde{I}_{\text {normal }}\right)_{x, y}-\left(\tilde{I}_{\text {low }}\right)_{x, y}\right)^{2}\)       (5)

 where W and H represent the width and the height of the illumination component, respectively.
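
 The sketch below follows the stated goal of pulling the predicted illumination toward the normal-light component; the equation above is written with the low-light illumination, and reading that term as the prediction is our assumption.

```python
import tensorflow as tf


def perception_loss(i_normal, i_pre):
    """Perception-loss sketch in the form of Eq. (5): the mean squared
    difference over all W x H positions between the normal-light illumination
    and the predicted illumination component."""
    return tf.reduce_mean(tf.square(i_normal - i_pre))
```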

 To make the output image Ioutput more vivid in the color space, the color loss \(\mathcal{L}^{c}\) is utilized to formulate the color difference between the output image Ioutput and the normal-light image Inormal, which can be expressed as (6):

\(\mathcal{L}^{c}=\sum_{p} \angle\left(\left(I_{\text {output }}\right)_{p},\left(I_{\text {normal }}\right)_{p}\right)\)       (6)

 where (·)p represents a pixel of the image and ∠(·, ·) calculates the angle between two RGB color vectors.
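
 A sketch of the color loss and of the combination loss in (3) follows; the replacement of the pixel sum by a mean and the weight values are our assumptions.

```python
import tensorflow as tf


def color_loss(output, normal, eps=1e-6):
    """Color-loss sketch for Eq. (6): per-pixel angle between the RGB vectors
    of the enhanced image and the normal-light image."""
    dot = tf.reduce_sum(output * normal, axis=-1)
    norms = tf.norm(output, axis=-1) * tf.norm(normal, axis=-1) + eps
    cos = tf.clip_by_value(dot / norms, -1.0 + eps, 1.0 - eps)
    return tf.reduce_mean(tf.acos(cos))


def total_loss(recon, perception, color, lambda_p=0.1, lambda_c=0.1):
    """Combination loss of Eq. (3); the weight values are assumptions."""
    return recon + lambda_p * perception + lambda_c * color
```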

4. Experimental results

 In this section, we analyze the performance of the proposed algorithm qualitatively and quantitatively. Different datasets are utilized for the training and testing processes in our experiments. The proposed method is implemented in the TensorFlow framework and trained for 2000 iterations with a learning rate of 0.0001. To maintain the fairness of the experiments, all experiments are performed on a PC running Windows 10 with 32 GB RAM, a 3.6 GHz CPU, and an NVIDIA GeForce RTX 2080 Ti 11 GB GPU.
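
 A sketch of the training configuration is given below; the reported settings are 2000 iterations at a learning rate of 1e-4, while the choice of the Adam optimizer and the gradient-update structure are assumptions.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)  # reported: lr = 0.0001
NUM_ITERATIONS = 2000                                      # reported iteration count


@tf.function
def train_step(model, batch_low, batch_normal, loss_fn):
    """One gradient step: the model output is scored by the combination loss
    (loss_fn) against the normal-light target and the weights are updated."""
    with tf.GradientTape() as tape:
        prediction = model(batch_low, training=True)
        loss = loss_fn(prediction, batch_normal)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```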

4.1 Image Dataset

 To learn the parameters of the proposed network, we assemble a new extreme low-light image training dataset with 1331 images, including 347 images of the LOL dataset[33], 735 images of the multi-exposure image dataset[37], and 249 images captured by our camera; each of them has a corresponding normal-light image. As shown in Fig. 2, our dataset covers a broad range of lighting conditions and scenes. Besides, we also verify the effectiveness of our method on a synthetic dataset[49], from which 50 low-light images are selected as the test set in our experiment.

Fig. 2. Several examples of low-light images in our dataset.

4.2 Qualitative analysis

 To evaluate the enhancement effect of the proposed method, we perform visual comparisons on low-light images, as shown in Fig. 3 and Fig. 4. By checking the details, such as the white cloud in Fig. 3(a), our method not only produces better visual effects in image color and contrast, but also fewer artifacts, while the conventional algorithms HE, GWA, AWB, GA, Dong[31], LIME[32], and the deep learning-based methods RetinexNet[33] and KinD[34] cannot recover the cloud clearly. HE and RetinexNet produce serious halo and blur in Fig. 4(b) and Fig. 4(g), respectively.

 We also process low-light images of the synthetic dataset[49] with different methods to assess the subjective visual quality. The results are shown in Fig. 5 and Fig. 6, from which we can clearly observe that our method improves the lightness and color of the images, restores detailed information, and makes the enhanced image closer to the real-world scene. As we can see, the result of LIME[32] tends to be over-exposed in some areas, and the algorithm proposed by Dong[31] usually produces an undesirable black halo.

Fig. 3. The enhanced results based on the proposed dataset. (a)Input, (b)HE, (c)GA, (d)GWA, (e)AWB, (f)LIME, (g)RetinexNet, (h)Dong, (i)KinD, (j)Ours.

Fig. 4. The enhanced results based on the proposed dataset. (a)Input, (b)HE, (c)GA, (d)GWA, (e)AWB, (f)LIME, (g)RetinexNet, (h)Dong, (i)KinD, (j)Ours.

Fig. 5. The enhanced results based on the synthetic dataset. (a)Input, (b)HE, (c)GA, (d)GWA, (e)AWB, (f)LIME, (g)RetinexNet, (h)Dong, (i)KinD, (j)Ours.

Fig. 6. The enhanced results based on the synthetic dataset. (a)Input, (b)HE, (c)GA, (d)GWA, (e)AWB, (f)LIME, (g)RetinexNet, (h)Dong, (i)KinD, (j)Ours.

4.3 Quantitative analysis

 We have also quantitatively compared our method with several existing enhancement algorithms on the proposed dataset and the synthetic dataset, respectively. For full-reference image quality assessment, we choose PSNR and SSIM to compare the performance of different methods on the proposed dataset and the synthesized dataset, while for no-reference image quality assessment, Entropy, NIQE, and PIQE are utilized to analyze the result images. Besides, runtime is used to evaluate the efficiency of our algorithm and the compared methods. Higher values of PSNR, SSIM, and Entropy indicate better quality of the enhanced image, whereas lower values are better for NIQE and PIQE.
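
 The full-reference and Entropy metrics can be reproduced, for instance, with scikit-image (version 0.19 or later for the channel_axis argument); the sketch below assumes H×W×3 float images in [0, 1].

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.measure import shannon_entropy


def full_reference_scores(enhanced, reference):
    """PSNR and SSIM, the full-reference metrics of Tables 1 and 2."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim


def entropy_score(enhanced):
    """Shannon entropy, one of the no-reference metrics reported."""
    return shannon_entropy(enhanced)
```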

 Table 1 and Table 2 report the average values of the 50 low-light test images on the above metrics. As shown in Table 1, our method has a clear advantage over all compared methods on the proposed dataset, especially in PSNR, NIQE, and PIQE. For the synthesized dataset, our method falls behind several methods in NIQE, as shown in Table 2, but in the remaining metrics it is clearly superior to the other methods. Overall, these results validate that our method has an advantage over the existing algorithms.

Table 1. Quantitative measurement results on proposed dataset.

Table 2. Quantitative measurement results on synthesized dataset.

To highlight the advantages of the proposed algorithm more intuitively, Fig. 7 shows the average runtime over the 50 low-light test images on the two datasets for the different algorithms. The red line denotes the runtime of each method on the proposed dataset, while the black line represents the runtime of each method on the synthesized dataset. From Fig. 7 it is clear that our method obtains the best results on both datasets, and is especially superior to KinD. The advantages of our method are thus demonstrated by all the above experiments, both qualitatively and quantitatively.

Fig. 7. The runtime comparison of each method based on the proposed dataset and the synthesized dataset.

Table 3 shows the detailed comparison of the enhanced results with the fusion result of the standard multi-exposure image sequence, which contains 16 images with exposure times (in seconds) of 32, 16, 8, 4, 2, 1, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, and 1/1024. In Table 3, the different enhanced images are obtained by using the image with an exposure time of 1/4 as the input, while the fusion result is produced by fusing the above 16 images with different exposure times. Each enhanced image has a corresponding difference image, which is generated by taking the absolute value of the difference between the fusion result and the enhanced image.
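
 The difference images of Table 3 can be computed as in the following sketch, which simply takes the per-pixel absolute difference in floating point.

```python
import numpy as np


def difference_image(fusion_result, enhanced):
    """Difference image used in Table 3: darker values mean the single-image
    enhanced result is closer to the multi-exposure fusion result."""
    a = fusion_result.astype(np.float32)
    b = enhanced.astype(np.float32)
    return np.abs(a - b)
```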

Table 3. Detail comparison of the enhanced results of single image (exposure time of 1/4) and fusion result.

 To highlight the advantages of the proposed algorithm, the mesh of the corresponding difference images is also shown in Table 3. For the difference images, darker regions indicate results closer to the fusion result; for the mesh figures, the lower the surface height of the difference image, the closer it is to the fusion result. From Table 3, we can observe that our method preserves more details than the other algorithms, which demonstrates that the proposed method can achieve the same effect using a single low-light image as the input instead of a multi-exposure image sequence. Hence, our method can achieve better results with fewer images and improve the efficiency of the algorithm compared with multi-exposure image fusion methods.

5. Conclusion

 This paper proposes a single low-light image enhancement method based on deep learning and Retinex, which first learns an illumination-component-to-illumination-component mapping to produce a predicted illumination component, and then fuses the predicted illumination component with the reflectance component of the normal-light image to reconstruct the enhanced image. Meanwhile, the introduced super-resolution algorithm also enhances the detail information of the images and reduces the artifacts of the result images, which achieves a better visual effect. Finally, a combination loss function is introduced to boost the effectiveness of the algorithm. Qualitative and quantitative experimental results on different datasets show that our method is superior to existing methods in improving the lightness and color of images, restoring detailed information, and reducing runtime.

Acknowledgement

 This research was funded by the National Natural Science Foundation of China, grant numbers 61605175 and 61602423 and the Department of Science and Technology of Henan Province, China, grant numbers 192102210292, 182102110399 and 212102210427.

References

  1. Guo, C., et al., "Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement," in Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  2. Guo, L., et al., "Learned snakes for 3D image segmentation," Signal Processing, vol. 183, p. 108013, 2021. https://doi.org/10.1016/j.sigpro.2021.108013
  3. He, A., et al., "A twofold siamese network for real-time object tracking," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  4. Choi, J., et al., "Context-aware deep feature compression for high-speed visual tracking," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  5. Shah, K.I., et al., "Autonomous Parking-Lots Detection with Multi-Sensor Data Fusion Using Machine Deep Learning Techniques," Cmc -Tech Science Press-, vol. 66(2), pp. 1595-1612, 2021.
  6. Fan, D.-P., et al., "Concealed object detection," arXiv preprint arXiv:2102.10274, 2021.
  7. Kousik, N., et al., "Improved salient object detection using hybrid Convolution Recurrent Neural Network," Expert Systems with Applications, vol. 166, pp. 114064, 2021. https://doi.org/10.1016/j.eswa.2020.114064
  8. Zoran, D., et al., "Towards robust image classification using sequential attention models," in Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9483-8482, 2020.
  9. Boulila, W., et al., "RS-DCNN: A novel distributed convolutional-neural-networks based approach for big remote-sensing image classification," Computers and Electronics in Agriculture, vol. 182, pp. 106014, 2021. https://doi.org/10.1016/j.compag.2021.106014
  10. Wang, P., E. Fan, and P. Wang, "Comparative analysis of image classification algorithms based on traditional machine learning and deep learning," Pattern Recognition Letters, vol. 141, pp. 61-67, 2021. https://doi.org/10.1016/j.patrec.2020.07.042
  11. Li, J., et al., "Luminance-aware Pyramid Network for Low-light Image Enhancement," IEEE Transactions on Multimedia, 2020.
  12. Wang, S., et al., "Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images," IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, vol. 22(9), pp. 3538 - 3548, 2013. https://doi.org/10.1109/TIP.2013.2261309
  13. Land, Edwin, and H., "The Retinex Theory of Color Vision," Scientific American, 1977.
  14. Niu, Y., et al., "HDR-GAN: HDR image reconstruction from multi-exposed ldr images with large motions," IEEE Transactions on Image Processing, vol. 30, pp. 3885-3896, 2021. https://doi.org/10.1109/TIP.2021.3064433
  15. Alghamdi, M., et al., "Transfer Deep Learning for Reconfigurable Snapshot HDR Imaging Using Coded Masks," in Proc. of Computer Graphics Forum, Wiley Online Library, 2021.
  16. Jobson, D.J. and Z. Rahman, "Properties and performance of a center/surround retinex," IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society, vol. 6(3), pp. 451-462, 1997. https://doi.org/10.1109/83.557356
  17. Rahman, Z., D.J. Jobson, and G.A. Woodell., "Multi-scale retinex for color image enhancement," in Proc. of 3rd IEEE International Conference on Image Processing, 2002.
  18. Song, X., et al., "Multi-scale joint network based on Retinex theory for low-light enhancement," Signal, Image and Video Processing, pp. 1-8, 2021.
  19. Xu, Y., et al., "A novel multi-scale fusion framework for detail-preserving low-light image enhancement," Information Sciences, vol. 548, pp. 378-397, 2021. https://doi.org/10.1016/j.ins.2020.09.066
  20. Lin, J., et al., "Fast multi-view image rendering method based on reverse search for matching," Optik, vol. 180, pp. 953-961, 2019. https://doi.org/10.1016/j.ijleo.2018.12.003
  21. Pei, B., Y. Peng, and Y. Luo, "A method of detecting defects of smart meter LCD screen based on LSD and deep learning," Multimedia Tools and Applications, pp. 1-18, 2021.
  22. Yang, et al., "Video super-resolution based on spatial-temporal recurrent residual networks," Computer Vision & Image Understanding, vol. 168, pp. 79-92, 2018. https://doi.org/10.1016/j.cviu.2017.09.002
  23. Wenhan, Y., et al., "Reference Guided Deep Super-Resolution via Manifold Localized External Compensation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29(5), pp. 1270-1283, 2019. https://doi.org/10.1109/tcsvt.2018.2838453
  24. Liu, J., et al., "Erase or Fill? Deep Joint Recurrent Rain Removal and Reconstruction in Videos," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
  25. Tan, R.T., "Attentive Generative Adversarial Network for Raindrop Removal from A Single Image," in Proc. of CVPR 2018, 2018.
  26. He, C., et al., "TSLRLN: Tensor subspace low-rank learning with non-local prior for hyperspectral image mixed denoising," Signal Processing, vol. 184, pp. 108060, 2021. https://doi.org/10.1016/j.sigpro.2021.108060
  27. Fu, L., et al., "Learning Robust Discriminant Subspace Based on Joint L2, p-and L2, s-Norm Distance Metrics," IEEE Transactions on Neural Networks and Learning Systems, 2020.
  28. Shen, L., et al., "Msr-net: Low-light image enhancement using deep convolutional network," arXiv preprint arXiv:1711.02488, 2017.
  29. Chen, C., et al., "Learning to See in the Dark," in Proc. of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
  30. Lai, W.-S., et al., "Fast and accurate image super-resolution with deep laplacian pyramid networks," IEEE transactions on pattern analysis and machine intelligence, vol. 41(11), pp. 2599-2613, 2019. https://doi.org/10.1109/tpami.2018.2865304
  31. Dong, X., Y.A. Pang, and J.G. Wen., "Fast efficient algorithm for enhancement of low lighting video," in Proc. of IEEE International Conference on Multimedia & Expo, 2011.
  32. Guo, X., Y. Li, and H. Ling, "LIME: Low-Light Image Enhancement via Illumination Map Estimation," IEEE Trans Image Process, vol. 26(2), pp. 982-993, 2017. https://doi.org/10.1109/TIP.2016.2639450
  33. Wei, C., et al., "Deep retinex decomposition for low-light enhancement," arXiv preprint arXiv:1808.04560, 2018.
  34. Zhang, Y., J. Zhang, and X. Guo, "Kindling the Darkness: A Practical Low-light Image Enhancer," in Proc. of the 27th ACM International Conference on Multimedia, pp. 1632-1640, 2019.
  35. Mittal, A., et al., "Making a "Completely Blind" Image Quality Analyzer," IEEE Signal Processing Letters, vol. 20(3), pp. 209-212, 2013. https://doi.org/10.1109/LSP.2012.2227726
  36. Venkatanath, N., et al., "Blind image quality evaluation using perception based features," in Proc. of 2015 Twenty First National Conference on Communications (NCC), IEEE, 2015.
  37. Cai, J., S. Gu, and L. Zhang, "Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images," IEEE Transactions on Image Processing, vol. 27(4), pp. 2049-2062, 2018. https://doi.org/10.1109/TIP.2018.2794218
  38. Li, L., et al., "A low-light image enhancement method for both denoising and contrast enlarging," in Proc. of IEEE International Conference on Image Processing, 2015.
  39. Celik, T. and T. Tjahjadi, "Contextual and Variational Contrast Enhancement," IEEE Transactions on Image Processing, vol. 20(12), pp. 3431-3441, 2011. https://doi.org/10.1109/TIP.2011.2157513
  40. Yang, W., et al., "From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement," in Proc. of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3063-3072, 2020.
  41. Jobson, D.J. and Z. Rahman, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Transactions on Image Processing, vol. 6(7), pp. 965-976, 1997. https://doi.org/10.1109/83.597272
  42. Ying, Z., et al., "A New Low-Light Image Enhancement Algorithm Using Camera Response Model," in Proc. of 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017.
  43. Wang, L.W., et al., "Lightening Network for Low-Light Image Enhancement," IEEE Transactions on Image Processing, vol. 29, pp. 7984-7996, 2020. https://doi.org/10.1109/TIP.2020.3008396
  44. Yang, J., et al., "Enhancement of Low Light Level Images with coupled dictionary learning," in Proc. of International Conference on Pattern Recognition, 2017.
  45. Lore, K.G., A. Akintayo, and S. Sarkar, "LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement," Pattern Recognition, vol. 61, pp. 650-662, 2017. https://doi.org/10.1016/j.patcog.2016.06.008
  46. Ren, W., et al., "Low-Light Image Enhancement via a Deep Hybrid Network," IEEE Transactions on Image Processing, vol. 28(9), pp. 4364-4375, 2019. https://doi.org/10.1109/tip.2019.2910412
  47. Land, E.H.E., "Lightness and Retinex Theory," Journal of the Optical Society of America, vol. 61(1), pp. 1-11, 1971. https://doi.org/10.1364/JOSA.61.000001
  48. Lai, W.S., et al., "Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 41(11), pp. 2599-2613, 2019. https://doi.org/10.1109/tpami.2018.2865304
  49. Lv, F., et al., "MBLLEN: Low-Light Image/Video Enhancement Using CNNs," in Proc. of BMVC, 2018.