• Title/Summary/Keyword: Haze Image


A Framework for Object Detection by Haze Removal (안개 제거에 의한 객체 검출 성능 향상 방법)

  • Kim, Sang-Kyoon;Choi, Kyoung-Ho;Park, Soon-Young
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.5, pp.168-176, 2014
  • Detecting moving objects from a video sequence is a fundamental and critical task in video surveillance, traffic monitoring and analysis, and human detection and tracking. It is very difficult to detect moving objects in a video sequence degraded by environmental factors such as fog. In particular, fog makes the color of an object similar to its surroundings and reduces its saturation, making it very difficult to distinguish the object from the background. For this reason, the performance and reliability of object detection and tracking are poor in foggy weather. In this paper, we propose a novel method to improve the performance of object detection by combining a haze removal algorithm with a local histogram-based object tracking method. For quantitative evaluation of the proposed system, the information retrieval measures recall and precision are used to quantify how much performance improves before and after haze removal. As a result, the visibility of the image is enhanced and the performance of object detection is improved.
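
The recall/precision evaluation described above can be sketched in plain Python; the object IDs below are hypothetical stand-ins for actual bounding-box matching against ground truth:

```python
def detection_metrics(detected, ground_truth):
    """Compute (recall, precision) for a set of detected object IDs
    against ground-truth IDs (a simplified stand-in for box matching)."""
    detected, ground_truth = set(detected), set(ground_truth)
    tp = len(detected & ground_truth)  # true positives
    recall = tp / len(ground_truth) if ground_truth else 0.0
    precision = tp / len(detected) if detected else 0.0
    return recall, precision

# Hypothetical results before and after haze removal:
before = detection_metrics({1, 2}, {1, 2, 3, 4, 5})        # 3 objects missed
after = detection_metrics({1, 2, 3, 4}, {1, 2, 3, 4, 5})   # 1 object missed
```

Comparing the two (recall, precision) pairs quantifies the improvement the paper reports.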

An Experiment on Image Restoration Applying the Cycle Generative Adversarial Network to Partial Occlusion Kompsat-3A Image

  • Won, Taeyeon;Eo, Yang Dam
    • Korean Journal of Remote Sensing, v.38 no.1, pp.33-43, 2022
  • This study presents a method to restore an optical satellite image distorted and occluded by fog, haze, and clouds to one that minimizes degradation factors, by referring to a peripheral image of the same type. By restoring only the partially occluded region, the time and cost of re-photographing are reduced. To preserve the original image's pixel values as much as possible and to maintain continuity between restored and unrestored areas, a simulation restoration technique based on a modified Cycle Generative Adversarial Network (CycleGAN) was developed. The accuracy of the simulated image was analyzed by comparing CycleGAN and histogram matching, as well as the pixel value distribution, with the original image. The results show that for Site 1 (of three sites), the root mean square error and R2 of CycleGAN were 169.36 and 0.9917, respectively, a lower error and a higher R2 than those of histogram matching (170.43 and 0.9896, respectively). Further, comparison of the mean and standard deviation of images simulated by CycleGAN and histogram matching with the ground-truth pixel values confirmed that the CycleGAN methodology is closer to the ground truth. Even for the histogram distribution of the simulated images, CycleGAN was closer to the ground truth than histogram matching.
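
The RMSE and R2 comparison against ground-truth pixel values can be reproduced with a small sketch, treating images as flat lists of pixel values (a simplification of the per-band comparison the paper performs):

```python
import math

def rmse(pred, truth):
    """Root mean square error between predicted and ground-truth pixels."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

def r_squared(pred, truth):
    """Coefficient of determination R^2 of the prediction vs. the truth."""
    mean_t = sum(truth) / len(truth)
    ss_res = sum((t - p) ** 2 for p, t in zip(pred, truth))
    ss_tot = sum((t - mean_t) ** 2 for t in truth)
    return 1.0 - ss_res / ss_tot
```

Lower RMSE and higher R2 against the ground truth are the criteria by which CycleGAN was judged better than histogram matching.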

Weather Classification and Image Restoration Algorithm Attentive to Weather Conditions in Autonomous Vehicles (자율주행 상황에서의 날씨 조건에 집중한 날씨 분류 및 영상 화질 개선 알고리듬)

  • Kim, Jaihoon;Lee, Chunghwan;Kim, Sangmin;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.11a, pp.60-63, 2020
  • With the advent of deep learning, many attempts have been made in computer vision to substitute deep learning models for conventional algorithms. Among them, image classification, object detection, and image restoration have received considerable attention from researchers. However, most contributions have been confined to only one of these fields. We propose a new paradigm of model structure: an end-to-end model that classifies the noise in an image and restores the image accordingly, enhancing both universality and efficiency. Our proposed model is a 'One-For-All' model that classifies the weather condition in an image and returns a clean image accordingly. By separating weather conditions, the restoration model becomes more compact as well as more effective at reducing raindrops, snowflakes, or haze, which degrade image quality.
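
The classify-then-restore routing can be illustrated with a toy dispatch sketch; the labels and restorer names here are hypothetical, and the real versions would be CNN sub-networks rather than dictionary entries:

```python
def classify_weather(image):
    """Toy classifier: reads the degradation label attached to the image dict.
    A real model would predict this label from pixels."""
    return image["condition"]

# Hypothetical per-condition restorers; each returns a 'clean' image.
RESTORERS = {
    "rain": lambda img: {**img, "condition": "clear", "restored_by": "derain"},
    "snow": lambda img: {**img, "condition": "clear", "restored_by": "desnow"},
    "haze": lambda img: {**img, "condition": "clear", "restored_by": "dehaze"},
    "clear": lambda img: img,  # nothing to do
}

def one_for_all(image):
    """Classify the weather condition, then route to the matching restorer."""
    return RESTORERS[classify_weather(image)](image)
```

The dispatch keeps each restorer specialized and compact, which is the compactness argument made above.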


Deep Multi-task Network for Simultaneous Hazy Image Semantic Segmentation and Dehazing (안개영상의 의미론적 분할 및 안개제거를 위한 심층 멀티태스크 네트워크)

  • Song, Taeyong;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Kuyong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society, v.22 no.9, pp.1000-1010, 2019
  • Image semantic segmentation and dehazing are key tasks in computer vision. In recent years, research on both tasks has achieved substantial performance improvements with the development of Convolutional Neural Networks (CNNs). However, most previous works on semantic segmentation assume that images are captured in clear weather and show degraded performance on hazy images with low contrast and faded color. Meanwhile, dehazing aims to recover a clear image from an observed hazy image, an ill-posed problem that can be alleviated with additional information about the image. In this work, we propose a deep multi-task network for simultaneous semantic segmentation and dehazing. The proposed network takes a single hazy image as input and predicts a dense semantic segmentation map and a clear image. The visual information refined during the dehazing process can help the recognition task of semantic segmentation; conversely, semantic features obtained during segmentation can provide color priors for objects, which help the dehazing process. Experimental results demonstrate the effectiveness of the proposed multi-task approach, showing improved performance compared to separate networks.
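
The shared-representation idea behind the multi-task design can be sketched structurally; this toy stand-in shares one computed feature between two "heads" (the real network uses a CNN encoder with segmentation and dehazing decoders):

```python
def shared_encoder(image):
    """Toy stand-in for a shared CNN encoder: a single scalar feature
    (here, simply the mean intensity of a flat list of pixels)."""
    return sum(image) / len(image)

def multi_task(image):
    """Structural sketch of multi-task output: one shared representation
    feeds both a segmentation head and a dehazing head, so the two tasks
    share information, as in the paper's architecture."""
    feat = shared_encoder(image)
    segmentation = [1 if px > feat else 0 for px in image]      # label per pixel
    dehazed = [min(1.0, px / max(feat, 1e-6)) for px in image]  # toy "restore"
    return segmentation, dehazed
```

Only the structure (shared trunk, two task heads) is faithful to the paper; the per-pixel operations are placeholders.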

Jointly Learning of Heavy Rain Removal and Super-Resolution in Single Images

  • Vu, Dac Tung;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.11a, pp.113-117, 2020
  • Images taken in adverse weather such as rain, haze, or snow often show low visibility, which can dramatically decrease the accuracy of computer vision tasks such as object detection and segmentation. Moreover, previous image enhancement methods usually downsample the image to obtain consistent features but lack a good upsampling algorithm to recover the original size. In this research, we therefore jointly perform rain streak removal in heavy-rain images and super-resolution using a deep network. We put forth a two-stage network: a multi-model network followed by a refinement network. The first stage uses the rain formation model of a single image and two operation layers (addition and multiplication) to remove rain streaks and noise, producing a clean image at low resolution. The second stage uses a refinement network to recover damaged background information and to upsample, yielding a high-resolution image. Our method improves visual image quality and gains accuracy on a human action recognition task. Extensive experiments show that our network outperforms state-of-the-art (SoTA) methods.
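
The two operation layers (multiplication and addition) act on a rain formation model. One commonly used heavy-rain model is O = t·(B + S) + (1 − t)·A, combining streaks S with atmospheric veiling A; this is an assumption here, not necessarily the paper's exact formula. Inverting it per pixel looks like:

```python
def recover_background(observed, streak, trans, airlight):
    """Invert the assumed heavy-rain model O = t*(B + S) + (1 - t)*A
    per pixel: one multiplication (divide by t) and subtractions recover
    the background B, clamped to the valid range [0, 1]."""
    out = []
    for o, s, t, a in zip(observed, streak, trans, airlight):
        b = (o - (1.0 - t) * a) / t - s
        out.append(min(1.0, max(0.0, b)))
    return out
```

In the actual network, the streak layer S and transmission t are predicted by learned sub-models rather than given.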


Survey on Quantitative Performance Evaluation Methods of Image Dehazing (안개 제거 기술의 정량적인 성능 평가 기법 조사)

  • Lee, Sungmin;Yu, Jae Taeg;Jung, Seung-Won;Ra, Sung Woong
    • KIPS Transactions on Software and Data Engineering, v.4 no.12, pp.571-576, 2015
  • Image dehazing has been extensively studied, but performance evaluation methods for dehazing techniques have not attracted significant interest. This paper surveys existing performance evaluation methods for image dehazing. To analyze the reliability of the evaluation methods, synthetic hazy images are first reconstructed using ground-truth color and depth image pairs, and the dehazed images are then compared with the original haze-free images. We also evaluate dehazing algorithms not by the quality of the dehazed images but by the performance of computer vision algorithms before and after applying image dehazing. All the aforementioned evaluation methods are analyzed and compared, and research directions for improving the existing methods are discussed.
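
Synthetic hazy images built from color and depth pairs typically follow the atmospheric scattering model I = J·t + A·(1 − t) with transmission t = e^(−β·d); a per-pixel sketch, where the parameter names are illustrative conventions rather than the survey's notation:

```python
import math

def synthesize_haze(clear, depth, beta=1.0, airlight=1.0):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * depth), per pixel.
    `clear` and `depth` are flat lists; pixel values lie in [0, 1]."""
    hazy = []
    for j, d in zip(clear, depth):
        t = math.exp(-beta * d)
        hazy.append(j * t + airlight * (1.0 - t))
    return hazy
```

A dehazing algorithm can then be scored by comparing its output on the synthesized image against the original haze-free `clear` input.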

PM2.5 Estimation Based on Image Analysis

  • Li, Xiaoli;Zhang, Shan;Wang, Kang
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.2, pp.907-923, 2020
  • For the severe haze situation in the Beijing-Tianjin-Hebei region, conventional fine particulate matter (PM2.5) concentration prediction methods based on pollutant data face problems such as incomplete data, which may lead to poor prediction performance. Therefore, this paper proposes a method of predicting the PM2.5 concentration based on image analysis technology that combines image data, which can reflect the original weather conditions, with currently popular machine learning methods. First, based on local parameter estimation, autoregressive (AR) model analysis, and local estimation of the increase in image blur, we extract features from the weather images using an approach inspired by free energy and a no-reference robust metric model. Next, we compare the coefficient energy and contrast difference of each pixel in the AR model and then use the percentages to calculate the image sharpness and derive the overall mass fraction. The relationship between the residual value and the PM2.5 concentration is fitted by a generalized Gaussian distribution (GGD) model. Finally, nonlinear mapping is performed via a wavelet neural network (WNN) to obtain the PM2.5 concentration. Experimental results on real data show that the proposed method offers improved prediction accuracy and a lower root mean square error (RMSE).
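
The core intuition, hazier air blurs the scene, so lower image sharpness correlates with higher PM2.5, can be illustrated with a drastically simplified stand-in (the paper's actual metric uses AR-model coefficients and free-energy features; this toy uses neighboring-pixel differences on a 1-D gray signal):

```python
def blur_score(gray):
    """Crude sharpness proxy: mean absolute difference between neighboring
    pixels of a 1-D gray signal. A sharper (less hazy) image yields a
    higher score; heavy haze flattens the signal toward zero."""
    return sum(abs(a - b) for a, b in zip(gray, gray[1:])) / (len(gray) - 1)
```

In the paper, such sharpness-derived features are then mapped to a PM2.5 concentration by the GGD fit and the wavelet neural network.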

A Study on Indoor Smoke Detection Based on Convolutional Neural Network Using Real Time Image Analysis (실시간 영상분석을 이용한 합성곱 신경망 기반의 실내 연기 감지 연구)

  • Ryu, Jin-Kyu;Kwak, Dong-Kurl;Lee, Bong-Seob;Kim, Dae-Hwan
    • Proceedings of the KIPE Conference, 2019.07a, pp.537-539, 2019
  • Recently, large-scale fires have occurred as urban buildings have become more and more dense. In particular, the spread of smoke in high-rise buildings is a serious problem, and smoke is the main cause of death in fires. Therefore, in this paper, image-based smoke detection using deep learning techniques is proposed to prevent damage in cases where existing detectors fail to detect smoke. In addition, the detection model was trained not only on a smoke dataset but also on an additional haze-type dataset to improve accuracy.


Single Image Dehazing Using Dark Channel Prior and Minimal Atmospheric Veil

  • Zhou, Xiao;Wang, Chengyou;Wang, Liping;Wang, Nan;Fu, Qiming
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.1, pp.341-363, 2016
  • Haze or fog is a common natural phenomenon. In foggy weather, captured pictures are difficult to apply to computer vision systems such as road traffic detection and target tracking. Therefore, the image dehazing technique has become a hotspot in the field of image processing. This paper presents an overview of existing achievements in image dehazing. The intent of this paper is not to review all the relevant works that have appeared in the literature, but rather to focus on two main approaches: image dehazing based on the atmospheric veil and image dehazing based on the dark channel prior. After the overview and a comparative study, we propose an improved image dehazing method based on the two schemes mentioned above. Our method obtains fog-free images by constructing a more desirable atmospheric veil and estimating the atmospheric light more accurately. In addition, we adjust the transmission of sky regions and apply tone mapping to the obtained images. Compared with other state-of-the-art algorithms, experimental results show that images recovered by our algorithm are clearer and more natural, especially in distant scenes and places where the scene depth jumps abruptly.
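
The dark channel prior the paper builds on can be sketched directly on nested lists (values in [0, 1]; the patch size and omega below are conventional choices, not the paper's tuned values):

```python
def dark_channel(image, patch=3):
    """Dark channel of an H x W x 3 image (nested lists): for each pixel,
    the minimum color channel over a local patch. In haze-free outdoor
    images this value is close to zero almost everywhere."""
    h, w = len(image), len(image[0])
    r = patch // 2
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for yy in range(max(0, y - r), min(h, y + r + 1)):
                for xx in range(max(0, x - r), min(w, x + r + 1)):
                    vals.append(min(image[yy][xx]))  # min over RGB
            dark[y][x] = min(vals)                   # min over the patch
    return dark

def transmission(image, airlight, omega=0.95, patch=3):
    """Transmission estimate t = 1 - omega * dark_channel(I / A),
    where A is the per-channel atmospheric light."""
    norm = [[[c / a for c, a in zip(px, airlight)] for px in row] for row in image]
    return [[1.0 - omega * d for d in row] for row in dark_channel(norm, patch)]
```

With the transmission and atmospheric light in hand, the scene radiance is recovered by inverting the scattering model, the step the paper refines for sky regions.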

Lightweight multiple scale-patch dehazing network for real-world hazy image

  • Wang, Juan;Ding, Chang;Wu, Minghu;Liu, Yuanyuan;Chen, Guanhai
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.12, pp.4420-4438, 2021
  • Image dehazing is an ill-posed problem that is far from being solved. Traditional image dehazing methods often yield mediocre results and substandard processing speed, while modern deep learning methods perform well only on certain datasets; their haze removal is otherwise unsatisfactory, meaning the generalization performance fails to meet requirements. Concurrently, due to limited processing speed, most dehazing algorithms cannot be employed in industry. To alleviate these problems, a lightweight fast dehazing network based on a multiple scale-patch framework (MSP) is proposed in the present paper. First, the multi-scale structure is employed as the backbone network and the multi-patch structure as the supplementary network. Because dehazing through a single network causes problems such as loss of object details and color in some image areas, the multi-patch structure is employed in MSP as an information supplement: in the image processing module, the image is segmented into upper and lower parts that are processed separately. Second, MSP produces a clear dehazing effect and significant robustness on real-world homogeneous and non-homogeneous hazy images and across different datasets. Compared with existing dehazing methods, MSP demonstrates fast inference and the feasibility of real-time processing. The overall size and parameter count of the entire dehazing model are 20.75 MB and 6.8 M, and the processing time for a single image is 0.026 s. Experiments on NTIRE 2018 and NTIRE 2020 demonstrate that MSP achieves performance superior to state-of-the-art methods in PSNR, SSIM, LPIPS, and individual subjective evaluation.
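
The up/down patch split in the multi-patch branch amounts to processing two halves independently and stitching the results; in this sketch `dehaze_fn` is a stand-in for the per-patch dehazing sub-network:

```python
def split_patches(image):
    """Split an image (list of pixel rows) into top and bottom halves,
    mirroring the up/down segmentation in the multi-patch branch."""
    mid = len(image) // 2
    return image[:mid], image[mid:]

def dehaze_multi_patch(image, dehaze_fn):
    """Process the two halves separately with dehaze_fn, then stitch the
    outputs back together in order, as an information supplement to the
    multi-scale backbone."""
    top, bottom = split_patches(image)
    return dehaze_fn(top) + dehaze_fn(bottom)
```

Processing smaller patches lets each pass attend to local detail, which is the motivation given for supplementing the multi-scale backbone.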