• Title/Summary/Keyword: Low-light image


Single Low-Light Ghost-Free Image Enhancement via Deep Retinex Model

  • Liu, Yan;Lv, Bingxue;Wang, Jingwen;Huang, Wei;Qiu, Tiantian;Chen, Yunzhong
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.5, pp.1814-1828, 2021
  • Low-light image enhancement is a key technique for overcoming the quality degradation of photos taken under scotopic illumination conditions. The degradation includes low brightness, low contrast, and prominent noise, which seriously affect both human visual recognition and subsequent image processing. In this paper, we propose an approach based on deep learning and Retinex theory to enhance low-light images, consisting of image decomposition, illumination prediction, image reconstruction, and image optimization. The first three parts reconstruct an enhanced image that suffers from low resolution. To reduce the noise of the enhanced image and improve image quality, a super-resolution algorithm based on a Laplacian pyramid network is introduced to optimize the image; the network improves the resolution of the enhanced image through multiple feature extraction and deconvolution operations. Furthermore, a combined loss function is explored in the training stage to improve the efficiency of the algorithm. Extensive experiments and comprehensive evaluations demonstrate the strength of the proposed method: the result is closer to the real-world scene in lightness, color, and detail. Experiments also show that the proposed method, using a single low-light image, achieves the same effect as multi-exposure image fusion algorithms without introducing ghosting.
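
A minimal illustrative sketch of the Retinex idea underlying this abstract: the image is modeled as reflectance times illumination, the illumination is approximated by a Gaussian-blurred copy, and the reflectance is recovered in the log domain. This is a classical single-scale Retinex sketch, not the paper's deep network; the sigma and gamma values are assumptions.

```python
# Classical single-scale Retinex sketch (not the paper's deep model).
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=30.0, eps=1e-6):
    """img: 2-D float array in [0, 1]."""
    illumination = gaussian_filter(img, sigma=sigma)               # smooth estimate of L
    reflectance = np.log(img + eps) - np.log(illumination + eps)   # log-domain R
    return reflectance, illumination

def enhance(img, gamma=2.2):
    reflectance, illumination = retinex_decompose(img)
    boosted = np.power(np.clip(illumination, 0.0, 1.0), 1.0 / gamma)  # brighten L
    return np.clip(np.exp(reflectance) * boosted, 0.0, 1.0)           # recombine R and L
```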

Preprocessing for High Quality Real-time Imaging Systems by Low-light Stretch Algorithm

  • Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE, v.22 no.3, pp.585-589, 2018
  • Consumer demand for high-quality image and video services has led to a growing trend in image quality enhancement research, and recent years have seen substantial progress in this field. Through careful observation of images processed by enhancement algorithms, we found that dark regions usually suffer some loss of contrast. In this paper, a low-light stretch preprocessing algorithm is therefore proposed to resolve this issue. The proposed approach is evaluated qualitatively and quantitatively against the well-known histogram equalization and Photoshop curve adjustment, and the results validate the efficiency and superiority of the low-light stretch over these benchmark methods. In addition, we propose a 255 MHz-capable hardware implementation to ease the incorporation of low-light stretch into real-time imaging systems, such as aerial surveillance and monitoring with drones and driver assistance systems.
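
A hedged sketch of what a low-light stretch can look like: a piecewise-linear mapping that expands the dark range while compressing the rest. The knee and target values below are illustrative assumptions, not the parameters used in the paper or its hardware design.

```python
# Piecewise-linear low-light stretch sketch with assumed parameters.
import numpy as np

def low_light_stretch(img, knee=0.25, target=0.45):
    """img: float array in [0, 1]. Pixels below `knee` are stretched up to `target`."""
    img = np.clip(img, 0.0, 1.0)
    dark_gain = target / knee                       # slope of the dark segment
    bright_gain = (1.0 - target) / (1.0 - knee)     # slope of the bright segment
    out = np.where(img < knee,
                   img * dark_gain,
                   target + (img - knee) * bright_gain)
    return np.clip(out, 0.0, 1.0)
```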

Pixel-Wise Polynomial Estimation Model for Low-Light Image Enhancement

  • Muhammad Tahir Rasheed;Daming Shi
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.9, pp.2483-2504, 2023
  • Most existing low-light enhancement algorithms either use a large number of training parameters or lack generalization to real-world scenarios. This paper presents a novel lightweight and robust pixel-wise polynomial approximation-based deep network for low-light image enhancement. Pixel-wise higher-order polynomials map the low-light image to the enhanced image, and a deep convolutional network estimates the coefficients of these polynomials. The proposed network uses multiple branches to estimate pixel values based on different receptive fields: with the smallest receptive field, the first branch enhances local features, the second and third branches focus on medium-level features, and the last branch enhances global features. The low-light image is downsampled by a factor of 2^(b-1) (where b is the branch number) and fed as input to each branch, and the final enhanced image is obtained by combining the outputs of the branches. A comprehensive evaluation of the proposed network on six publicly available no-reference test datasets shows that it outperforms state-of-the-art methods on both quantitative and qualitative measures.
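
A sketch of the pixel-wise polynomial mapping described above: each output pixel is a polynomial in the input pixel value, with per-pixel coefficient maps that the CNN would predict. The coefficient maps below are random placeholders, and the polynomial order is an assumption.

```python
# Per-pixel polynomial enhancement with placeholder coefficient maps.
import numpy as np

def apply_pixelwise_polynomial(img, coeffs):
    """img: (H, W, C) in [0, 1]; coeffs: (K, H, W, C) per-pixel coefficients.
    Output = sum_k coeffs[k] * img**(k+1)."""
    out = np.zeros_like(img)
    for k, c in enumerate(coeffs):
        out += c * np.power(img, k + 1)
    return np.clip(out, 0.0, 1.0)

# Placeholder coefficients standing in for the network's prediction.
img = np.random.rand(64, 64, 3).astype(np.float32)
coeffs = np.random.rand(4, 64, 64, 3).astype(np.float32) * 0.5
enhanced = apply_pixelwise_polynomial(img, coeffs)
```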

A Study on Low-Light Image Enhancement Technique for Improvement of Object Detection Accuracy in Construction Site (건설현장 내 객체검출 정확도 향상을 위한 저조도 영상 강화 기법에 관한 연구)

  • Jong-Ho Na;Jun-Ho Gong;Hyu-Soung Shin;Il-Dong Yun
    • Tunnel and Underground Space, v.34 no.3, pp.208-217, 2024
  • Considerable research effort has gone into developing and deploying deep learning-based surveillance systems to manage health and safety issues on construction sites. In particular, research on deep learning-based object detection under various environmental changes has progressed because such changes degrade detection performance. Among these environmental variables, the accuracy of object detection models drops significantly under low illuminance, and consistent accuracy cannot be secured even when the model is trained on low-light images. Accordingly, low-light enhancement is needed to maintain performance under low illuminance. This paper therefore conducts a comparative study of several deep learning-based low-light image enhancement models (GLADNet, KinD, LLFlow, Zero-DCE) using image data acquired at construction sites. The enhanced low-light images were verified visually and analyzed quantitatively with image quality metrics such as PSNR, SSIM, and Delta-E. In the experiments, GLADNet showed excellent low-light enhancement performance in both quantitative and qualitative evaluation and was judged suitable as a low-light image enhancement model. If low-light image enhancement is applied as preprocessing for deep learning-based object detection, consistent detection performance is expected in low-light environments.
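
A sketch of the quantitative comparison described above, using scikit-image implementations of PSNR, SSIM, and CIE Delta-E between an enhanced image and a reference image (assumed to be available as aligned float RGB arrays in [0, 1]); the exact evaluation protocol of the paper is not reproduced here.

```python
# PSNR / SSIM / Delta-E evaluation sketch with scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.color import rgb2lab, deltaE_cie76

def evaluate(enhanced, reference):
    """enhanced, reference: (H, W, 3) float RGB arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
    delta_e = deltaE_cie76(rgb2lab(reference), rgb2lab(enhanced)).mean()
    return {"PSNR": psnr, "SSIM": ssim, "DeltaE": delta_e}
```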

Image Enhancement of Image Intensifying Device in Extremely Low-Light Levels using Multiple Filters and Anisotropic Diffusion (다중필터와 이방성 확산을 이용한 극 저조도 조건에서의 미광증폭장비 영상 개선)

  • Moon, Jin-Kyu
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.7, pp.36-41, 2018
  • An image intensifying device is equipment that makes faint objects visible in a dark environment, for example by making a night scene bright enough for objects to be observed visually. When a certain amount of weak light is present, a clear image can be obtained by amplifying it. In an extremely low-light environment where not even moonlight is present, however, there is not enough light to amplify, and the sharpness of the output deteriorates. In this paper, a method is proposed to improve image quality by applying multiple filters and anisotropic diffusion to the noisy output of the image intensifying device in such extreme low-light environments. For the experiment, the output of the device was captured under extremely low-light conditions and signal processing for image quality improvement was performed. The filter configuration applies a median filter and a Wiener filter to effectively remove the salt-and-pepper and Gaussian noise that dominate the image, followed by anisotropic diffusion. Experimental results show that the processing visually enhances image quality, and both the peak signal-to-noise ratio (PSNR) and SSIM, as quantitative indicators, show improved values.
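
A sketch of the described pipeline under stated assumptions: a median filter for salt-and-pepper noise, a Wiener filter for Gaussian noise, then a few iterations of Perona-Malik anisotropic diffusion. Filter sizes, iteration count, and diffusion parameters are illustrative assumptions rather than the values tuned in the paper.

```python
# Median -> Wiener -> anisotropic diffusion sketch for intensifier frames.
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, gamma=0.2):
    """Simple Perona-Malik diffusion on a 2-D float image."""
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # edge-stopping conduction coefficients
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out

def denoise_intensifier_frame(frame):
    """frame: 2-D float array in [0, 1] from the image intensifier."""
    step1 = median_filter(frame, size=3)   # suppress salt-and-pepper noise
    step2 = wiener(step1, mysize=5)        # suppress Gaussian noise
    return np.clip(anisotropic_diffusion(step2), 0.0, 1.0)
```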

Unsupervised Learning with Natural Low-light Image Enhancement (자연스러운 저조도 영상 개선을 위한 비지도 학습)

  • Lee, Hunsang;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society, v.23 no.2, pp.135-145, 2020
  • Recently, deep learning-based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of sufficient training data, because obtaining a large number of low-/normal-light image pairs in real environments is difficult. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which imposes the constraint that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground truth is first generated to establish an unsupervised loss function, and the proposed enhancement network is then trained using this loss. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map to preserve image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves performance competitive with the state of the art.
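
A sketch of the bright channel prior mentioned above: the BCP at a pixel is the maximum channel value within a local patch, assumed to be close to 1 in a well-exposed image, so dividing by it gives a rough pseudo reference. The patch size and the way the pseudo ground truth is used in the loss are assumptions here.

```python
# Bright channel prior (BCP) and pseudo ground-truth sketch.
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, patch=15):
    """img: (H, W, 3) float in [0, 1]; returns the (H, W) bright channel."""
    per_pixel_max = img.max(axis=2)                   # max over colour channels
    return maximum_filter(per_pixel_max, size=patch)  # max over a local patch

def pseudo_ground_truth(low, patch=15, eps=1e-3):
    """Brighten a low-light image so its bright channel approaches 1."""
    bcp = bright_channel(low, patch)[..., None]
    return np.clip(low / np.maximum(bcp, eps), 0.0, 1.0)
```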

Deep Learning-Based Low-Light Imaging Considering Image Signal Processing

  • Minsu, Kwon
    • Journal of the Korea Society of Computer and Information, v.28 no.2, pp.19-25, 2023
  • In this paper, we propose a deep learning-based method for improving raw images captured in low-light conditions that takes the image signal processing pipeline into account. Because the lens and sensor of a smartphone camera are smaller than those of a DSLR, noise increases and image quality degrades in low light. Existing deep learning-based low-light image processing methods can produce unnatural images because they do not consider lens shading and white balance, which are major factors in image signal processing. In this paper, pixel distances from the image center and per-channel average values are used so that the deep learning model can account for the lens shading effect and white balance. Experiments with low-light images taken with a smartphone demonstrate that the proposed method produces high-quality low-light images, achieving a higher peak signal-to-noise ratio and structural similarity index measure than the existing method.
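
A sketch of the two auxiliary inputs described above: a per-pixel distance map from the image center (a proxy for lens shading) and per-channel averages (a proxy for white balance). Concatenating them with the raw input is an assumption; how the paper actually feeds them to its network is not specified here.

```python
# Auxiliary feature construction sketch: centre-distance map and channel means.
import numpy as np

def auxiliary_features(raw):
    """raw: (H, W, C) float array of a demosaiced low-light raw image."""
    h, w = raw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    dist = dist / dist.max()                     # normalised distance from centre
    channel_mean = raw.mean(axis=(0, 1))         # per-channel average for white balance
    wb_map = np.broadcast_to(channel_mean, raw.shape)
    return np.concatenate([raw, dist[..., None], wb_map], axis=-1)
```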

Metrics for Low-Light Image Quality Assessment

  • Sangmin Kim
    • Journal of the Korea Society of Computer and Information, v.28 no.8, pp.11-19, 2023
  • In this paper, we confirm that standard image quality metrics can be applied to low-light images. Owing to the nature of low-illumination images, light-related factors create various noise patterns, and the smaller the amount of light, the more severe the noise. In situations where a clean, noise-free image is difficult to obtain, the quality of a denoised low-illuminance image is therefore often judged by the human eye. In this paper, noise in low-illuminance images for which no ground truth is available is removed using Noise2Noise, and spatial and radial resolution are evaluated with metrics such as MTF and SNR using ISO 12233 charts and a ColorChecker. The results show that the quality of low-illuminance images, which has mainly been assessed qualitatively, can also be evaluated quantitatively.
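
A minimal sketch in the spirit of the evaluation above: SNR measured on a nominally uniform patch (for example a gray ColorChecker square) as mean over standard deviation, reported in dB. The patch coordinates and size are assumptions; the paper's full MTF/SNR procedure is not reproduced.

```python
# Patch-based SNR measurement sketch.
import numpy as np

def patch_snr_db(img, top, left, size=64):
    """img: 2-D float array; (top, left, size) selects a nominally uniform patch."""
    patch = img[top:top + size, left:left + size]
    snr = patch.mean() / (patch.std() + 1e-12)
    return 20.0 * np.log10(snr)
```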

A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu;Xiang, Keyun;Chen, Xiaopeng;Liu, Cheng
    • Journal of Information Processing Systems, v.17 no.5, pp.1004-1019, 2021
  • To address the low contrast, fuzzy edge details, and missing edge details that arise when fusing noisy images, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency component and a high-frequency component. High-frequency noise and edge information are mainly distributed in the high-frequency component, so the improved bilateral filter is applied to the high-frequency components of the two images to suppress noise and compute the detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the algorithm extracts as much edge detail as possible from both, while edge information is enhanced and the visual effect becomes clearer. For the low-frequency coefficients, a fusion rule based on the local-area standard deviation coefficient is adopted. Finally, the fused high- and low-frequency coefficients are reconstructed into the fusion image via the inverse NSCT. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while the noise is filtered, yielding a fused image with clear edges. The algorithm thus filters noise effectively and obtains clear fused images in noisy infrared and visible light image fusion.
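
A hedged sketch of the fusion scheme under simplified assumptions: the NSCT is replaced with a plain Gaussian base/detail split (the paper uses NSCT), detail layers are bilaterally filtered and fused by maximum absolute value, and base layers are fused by local-variance weighting, which approximates the local-area standard deviation rule.

```python
# Simplified IR/visible fusion sketch with a base/detail split standing in for NSCT.
import numpy as np
import cv2

def fuse_ir_visible(ir, vis, sigma=5, var_win=9):
    """ir, vis: 2-D float32 arrays in [0, 1] of the same size."""
    base_ir = cv2.GaussianBlur(ir, (0, 0), sigma)
    base_vis = cv2.GaussianBlur(vis, (0, 0), sigma)
    det_ir = cv2.bilateralFilter(ir - base_ir, 5, 0.1, 5)    # denoised detail layer
    det_vis = cv2.bilateralFilter(vis - base_vis, 5, 0.1, 5)

    # detail fusion: keep the coefficient with the larger magnitude
    detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    # base fusion: weight each image by its local variance
    def local_var(x):
        mean = cv2.blur(x, (var_win, var_win))
        return cv2.blur(x * x, (var_win, var_win)) - mean * mean

    w_ir = local_var(base_ir)
    w_vis = local_var(base_vis)
    w = w_ir / (w_ir + w_vis + 1e-12)
    base = w * base_ir + (1.0 - w) * base_vis

    return np.clip(base + detail, 0.0, 1.0)
```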

Low-Light Invariant Video Enhancement Scheme Using Zero Reference Deep Curve Estimation (Zero Deep Curve 추정방식을 이용한 저조도에 강인한 비디오 개선 방법)

  • Choi, Hyeong-Seok;Yang, Yoon Gi
    • Journal of Korea Multimedia Society, v.25 no.8, pp.991-998, 2022
  • Recently, object recognition using image and video signals has been spreading rapidly in autonomous driving and mobile phones. However, the actual input images and videos are easily exposed to poor illumination environments. Recent research on illumination improvement makes it possible to estimate and compensate for illumination parameters. In this study, we propose VE-DCE (video enhancement zero-reference deep curve estimation) to improve the illumination of low-light video. The proposed VE-DCE uses an unsupervised, zero-reference deep curve, one of the latest learning-based estimation techniques. Experimental results show that the proposed method achieves quality comparable to the previous method on low-light videos as well as images, while reducing computational complexity relative to the existing method.
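
A sketch of the zero-reference deep curve family referenced above: each iteration applies the quadratic light-enhancement curve LE(x) = x + A * x * (1 - x), where A is a per-pixel curve-parameter map that the network estimates. The constant curve maps and the iteration count below are placeholders, not the paper's learned values.

```python
# Zero-reference light-enhancement curve sketch with placeholder curve maps.
import numpy as np

def apply_light_enhancement_curve(img, curve_maps):
    """img: (H, W, 3) in [0, 1]; curve_maps: iterable of (H, W, 3) maps in [-1, 1]."""
    x = img.copy()
    for A in curve_maps:
        x = x + A * x * (1.0 - x)   # one iteration of the quadratic curve
    return np.clip(x, 0.0, 1.0)

# Placeholder curve maps; a trained DCE-style network would predict these per frame.
frame = np.random.rand(32, 32, 3).astype(np.float32)
maps = [np.full_like(frame, 0.6) for _ in range(8)]
enhanced = apply_light_enhancement_curve(frame, maps)
```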