• Title/Summary/Keyword: Visible-infrared image fusion


A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu;Xiang, Keyun;Chen, Xiaopeng;Liu, Cheng
    • Journal of Information Processing Systems, v.17 no.5, pp.1004-1019, 2021
  • To address low image contrast, blurred edge details, and missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency component and a high-frequency component. Noise and edge information are concentrated in the high-frequency component, so the improved bilateral filter is applied to the high-frequency components of the two images, suppressing noise while computing the detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the algorithm extracts as much edge detail from both as possible, enhancing edge information and producing a clearer visual result. For the low-frequency coefficients, a fusion rule based on the local-area standard deviation is adopted. Finally, the fused high- and low-frequency coefficients are reconstructed into the fusion image via the inverse NSCT. The fusion results show that edges, contours, textures, and other details are preserved and enhanced while the noise is filtered, yielding a fused image with clear edges. The algorithm thus filters noise effectively and obtains clear fused images in noisy infrared and visible light image fusion.
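
A minimal sketch of the pipeline the abstract above describes. NSCT is not available in common Python packages, so a Gaussian low-pass split stands in for the multiscale decomposition; the bilateral-filter denoising, detail superposition, and local-standard-deviation rule for the low-frequency band are illustrative assumptions, not the authors' exact formulas.

```python
import cv2
import numpy as np

def fuse_noisy_ir_vis(ir, vis, ksize=31, win=9):
    ir = ir.astype(np.float32); vis = vis.astype(np.float32)
    # "Low-frequency" components (stand-in for the NSCT low-pass band)
    ir_low = cv2.GaussianBlur(ir, (ksize, ksize), 0)
    vis_low = cv2.GaussianBlur(vis, (ksize, ksize), 0)
    # "High-frequency" components carry edges and noise
    ir_high, vis_high = ir - ir_low, vis - vis_low
    # Bilateral filtering suppresses noise while preserving edges
    ir_high = cv2.bilateralFilter(ir_high, 5, 25, 5)
    vis_high = cv2.bilateralFilter(vis_high, 5, 25, 5)
    # Superimpose details: keep the stronger response at each pixel
    high = np.where(np.abs(ir_high) >= np.abs(vis_high), ir_high, vis_high)
    # Low-frequency rule: weight by local standard deviation
    def local_std(x):
        mean = cv2.blur(x, (win, win))
        return np.sqrt(np.maximum(cv2.blur(x * x, (win, win)) - mean * mean, 0))
    s_ir, s_vis = local_std(ir_low), local_std(vis_low)
    w = s_ir / (s_ir + s_vis + 1e-6)
    low = w * ir_low + (1 - w) * vis_low
    return np.clip(low + high, 0, 255).astype(np.uint8)
```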

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.6, pp.3092-3107, 2019
  • Visible-infrared image fusion synthesizes an infrared image and a visible image into a single fused image, combining the complementary advantages of both. The infrared image can capture a target object in dark or foggy environments, but its utility is hindered by the blurry appearance of objects. The visible image, on the other hand, shows objects clearly under normal lighting but is not useful in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method based on it. The proposed multi-guided filter is a modification of the guided filter that accepts multiple guidance images. The proposed fusion method is much faster than conventional image fusion methods. In experiments, we compare the proposed method and conventional methods in terms of quantitative and qualitative performance, fusion speed, and flickering artifacts. The proposed method synthesizes 57.93 frames per second at an image size of 320×270, confirming that it is capable of real-time processing. In addition, the proposed method produces flicker-free video.
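
For reference, a plain single-guidance guided filter built from box filters is sketched below; the paper's multi-guided extension and its real-time fusion rule are not reproduced here, and the radius/eps values are illustrative assumptions.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (both float32 in [0, 1])."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_g = cv2.blur(guide, ksize)
    mean_s = cv2.blur(src, ksize)
    corr_gs = cv2.blur(guide * src, ksize)
    corr_gg = cv2.blur(guide * guide, ksize)
    var_g = corr_gg - mean_g * mean_g            # local variance of the guide
    cov_gs = corr_gs - mean_g * mean_s           # local covariance guide/source
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    mean_a = cv2.blur(a, ksize)
    mean_b = cv2.blur(b, ksize)
    return mean_a * guide + mean_b

# Typical use in fusion: smooth per-pixel fusion weights with the visible image as
# guide, so the weight map follows visible-image edges before blending in infrared.
```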

Infrared and Visible Image Fusion Based on NSCT and Deep Learning

  • Feng, Xin
    • Journal of Information Processing Systems, v.14 no.6, pp.1405-1419, 2018
  • An image fusion method based on depth-model segmentation is proposed to overcome noise interference and artifacts in infrared and visible image fusion. First, a deep Boltzmann machine performs prior learning of the infrared and visible target and background contours, and a depth segmentation model of the contour is constructed. The Split Bregman iterative algorithm is employed to obtain the optimal energy segmentation of the infrared and visible image contours. Then, the nonsubsampled contourlet transform (NSCT) decomposes the source images, and corresponding rules integrate the coefficients according to the segmented background contour. Finally, the inverse NSCT reconstructs the fused image. MATLAB simulation results indicate that the proposed algorithm effectively fuses both target and background contours, with high contrast and noise suppression in subjective evaluation as well as strong objective quantitative indicators.
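
Only the region-wise coefficient fusion step lends itself to a short sketch. Below, a single-level DWT (PyWavelets) stands in for the NSCT, and `target_mask` is assumed to be a precomputed binary segmentation (1 = salient IR target, 0 = background); the deep-Boltzmann-machine prior and Split Bregman optimization are not reproduced.

```python
import cv2
import numpy as np
import pywt

def regionwise_fuse(ir, vis, target_mask, wavelet="haar"):
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(ir.astype(np.float32), wavelet)
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis.astype(np.float32), wavelet)
    # Resize the mask to the subband resolution (dsize is width, height)
    m = cv2.resize(target_mask.astype(np.float32), (cA_i.shape[1], cA_i.shape[0]))
    # Target regions keep IR coefficients; background uses max-absolute selection
    def mix(ci, cv_):
        return m * ci + (1 - m) * np.where(np.abs(ci) >= np.abs(cv_), ci, cv_)
    fused = (mix(cA_i, cA_v), (mix(cH_i, cH_v), mix(cV_i, cV_v), mix(cD_i, cD_v)))
    return pywt.idwt2(fused, wavelet)
```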

A Novel Image Dehazing Algorithm Based on Dual-tree Complex Wavelet Transform

  • Huang, Changxin;Li, Wei;Han, Songchen;Liang, Binbin;Cheng, Peng
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.5039-5055, 2018
  • The quality of natural outdoor images captured by visible camera sensors is usually degraded by haze in the atmosphere. In this paper, a fast image dehazing method based on visible and near-infrared image fusion is proposed. A visible image and a near-infrared (NIR) image of the same scene are fused using the dual-tree complex wavelet transform (DT-CWT) to generate a dehazed color image, and the color of the fused image is regulated using the haze concentration estimated by the dark channel prior (DCP). Experimental results demonstrate that the proposed method outperforms conventional dehazing methods and effectively resolves the color distortion that arises during dehazing.
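
A hedged sketch of the fusion step, assuming the third-party `dtcwt` package and its `Transform2d`/`Pyramid` API: the luminance of the hazy visible image and the NIR image are fused with a max-magnitude rule on the complex highpass bands and an average of the lowpass band. The paper's DCP-based color regulation is not reproduced here.

```python
import numpy as np
import dtcwt  # assumed third-party package

def dtcwt_fuse(vis_gray, nir, nlevels=3):
    t = dtcwt.Transform2d()
    p_vis = t.forward(vis_gray.astype(np.float64), nlevels=nlevels)
    p_nir = t.forward(nir.astype(np.float64), nlevels=nlevels)
    # Average the lowpass band, keep the larger-magnitude complex highpass coefficient
    lowpass = 0.5 * (p_vis.lowpass + p_nir.lowpass)
    highpasses = tuple(
        np.where(np.abs(hv) >= np.abs(hn), hv, hn)
        for hv, hn in zip(p_vis.highpasses, p_nir.highpasses)
    )
    return t.inverse(dtcwt.Pyramid(lowpass, highpasses))
```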

Two Scale Fusion Method of Infrared and Visible Images Using Saliency and Variance (현저성과 분산을 이용한 적외선과 가시영상의 2단계 스케일 융합방법)

  • Kim, Young Choon;Ahn, Sang Ho
    • Journal of Korea Multimedia Society, v.19 no.12, pp.1951-1959, 2016
  • In this paper, we propose a two-scale fusion method for infrared and visible images using saliency and variance. Each image is separated into two scales: a base layer containing the low-frequency component and a detail layer containing the high-frequency component. These layers are then combined using weights, where the saliency and the variance of the images serve as the fusion weights for the two scales. The proposed method is tested on several image pairs, and its performance is evaluated quantitatively using objective fusion metrics.
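
A minimal two-scale sketch in the spirit of the abstract above: a box filter splits each image into base and detail layers, local variance weights the base layers, and a Laplacian-magnitude "saliency" weights the detail layers. The exact saliency and weight definitions of the paper are assumptions here.

```python
import cv2
import numpy as np

def two_scale_fuse(ir, vis, base_ksize=31, win=7):
    ir = ir.astype(np.float32); vis = vis.astype(np.float32)
    # Base layers (low frequency) and detail layers (high frequency)
    base_ir, base_vis = cv2.blur(ir, (base_ksize,) * 2), cv2.blur(vis, (base_ksize,) * 2)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    def local_var(x):
        m = cv2.blur(x, (win, win))
        return cv2.blur(x * x, (win, win)) - m * m
    def saliency(x):
        return np.abs(cv2.Laplacian(x, cv2.CV_32F, ksize=3))
    wb = local_var(ir) / (local_var(ir) + local_var(vis) + 1e-6)
    wd = saliency(ir) / (saliency(ir) + saliency(vis) + 1e-6)
    fused = wb * base_ir + (1 - wb) * base_vis + wd * det_ir + (1 - wd) * det_vis
    return np.clip(fused, 0, 255).astype(np.uint8)
```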

Visible Image Enhancement Method Considering Thermal Information from Infrared Image (원적외선 영상의 열 정보를 고려한 가시광 영상 개선 방법)

  • Kim, Seonkeol;Kang, Hang-Bong
    • Journal of Broadcast Engineering, v.18 no.4, pp.550-558, 2013
  • Infrared and visible images capture different information because they record different wavelengths of light: the infrared image carries thermal information, while the visible image carries texture information. Desirable results are obtained by fusing the two. To enhance a visible image, we extract a weight map from it using saturation and brightness, and then adjust the weight map using the thermal information in the infrared image. Finally, an enhanced image is obtained by combining the infrared and visible images. Our experimental results show that the proposed algorithm effectively enhances smoke regions in the original image.
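
A hedged sketch of the weighting idea described above: a weight map is built from the saturation and brightness channels of the visible image, boosted where the infrared image is hot, and used to blend the two images. The specific weight formula and blending are illustrative assumptions, not the authors' exact method.

```python
import cv2
import numpy as np

def enhance_with_thermal(vis_bgr, ir_gray):
    hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    sat, val = hsv[..., 1], hsv[..., 2]
    ir = ir_gray.astype(np.float32) / 255.0
    # Low saturation and low brightness (e.g. smoke, dark areas) get more IR weight
    weight = (1.0 - sat) * (1.0 - val)
    # Emphasize thermally significant regions reported by the infrared image
    weight = np.clip(weight * (0.5 + ir), 0.0, 1.0)
    vis = vis_bgr.astype(np.float32) / 255.0
    ir3 = cv2.merge([ir, ir, ir])
    out = (1.0 - weight[..., None]) * vis + weight[..., None] * ir3
    return (out * 255).astype(np.uint8)
```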

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.8, pp.2749-2763, 2021
  • A desirable infrared (IR) and visible (VIS) image fusion result should contain the textural details of the VIS image and the salient targets of the IR image. However, detail in the dark regions of a VIS image has low contrast and blurry edges, degrading fusion performance. To resolve the problem of blurred details in dark regions, we propose a reflectance-estimation method for IR and VIS image fusion. To preserve and enhance details in these dark regions, dark region approximation (DRA) is proposed to optimize the Retinex model. With the DRA-improved Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of VIS images and the high-contrast targets of IR images. Experimental results show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
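
For intuition only, a single-scale Retinex reflectance estimate (Gaussian-smoothed illumination in the log domain) is sketched below as a stand-in for the paper's DRA-optimized, quasi-Newton reflectance estimation; the pixel-wise maximum used to fuse with the IR image is also an assumption.

```python
import cv2
import numpy as np

def reflectance_fuse(vis_gray, ir_gray, sigma=30):
    vis = vis_gray.astype(np.float32) / 255.0 + 1e-3
    illum = cv2.GaussianBlur(vis, (0, 0), sigma) + 1e-3   # smooth illumination estimate
    reflectance = np.exp(np.log(vis) - np.log(illum))      # Retinex: R = I / L in log form
    reflectance = cv2.normalize(reflectance, None, 0, 1, cv2.NORM_MINMAX)
    ir = ir_gray.astype(np.float32) / 255.0
    fused = np.maximum(reflectance, ir)                    # keep bright IR targets
    return (fused * 255).astype(np.uint8)
```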

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin;Hu, Kaiqun
    • Journal of Information Processing Systems, v.15 no.6, pp.1296-1305, 2019
  • To address poor noise suppression and the frequent loss of edge contours and detail in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. First, the source images are separately processed by variational multiscale decomposition to obtain texture components and structure components. A guided filter is used to fuse the texture components, while the structure components are fused using weights that combine phase consistency, sharpness, and brightness information. Finally, the fused texture and structure components are added to obtain the final fused image. Experimental results show that the proposed method is highly robust to noise and achieves better fusion quality.
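
A hedged sketch of the structure/texture split: scikit-image's TV (Chambolle) denoiser stands in for the variational multiscale decomposition, with the structure taken as the smoothed image and the texture as the residual. The max-absolute texture rule and the averaged structure rule are assumptions; the paper's guided-filter and phase-consistency weighting are not reproduced.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def structure_texture_fuse(ir, vis, weight=0.1):
    ir = ir.astype(np.float32) / 255.0
    vis = vis.astype(np.float32) / 255.0
    # Structure = TV-smoothed image, texture = residual
    s_ir = denoise_tv_chambolle(ir, weight=weight)
    s_vis = denoise_tv_chambolle(vis, weight=weight)
    t_ir, t_vis = ir - s_ir, vis - s_vis
    texture = np.where(np.abs(t_ir) >= np.abs(t_vis), t_ir, t_vis)
    structure = 0.5 * (s_ir + s_vis)
    return np.clip(structure + texture, 0, 1)
```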

MosaicFusion: Merging Modalities with Partial Differential Equation and Discrete Cosine Transformation

  • Trivedi, Gargi;Sanghavi, Rajesh
    • Journal of Applied and Pure Mathematics, v.5 no.5_6, pp.389-406, 2023
  • In the pursuit of enhancing image fusion techniques, this research presents a novel approach for fusing multimodal images, specifically infrared (IR) and visible (VIS) images, using a combination of partial differential equations (PDE) and the discrete cosine transformation (DCT). The proposed method leverages the thermal and structural information provided by IR imaging and the fine-grained details offered by VIS imaging to create composite images that are superior in quality and informativeness. Through a fusion process involving PDE-guided fusion, DCT component selection, and weighted combination, the methodology aims to preserve essential features while minimizing artifacts. Rigorous objective and subjective evaluations validate the effectiveness of the approach. This research contributes to the ongoing advancement of multimodal image fusion, addressing applications such as medical imaging, surveillance, and remote sensing, where combining IR and VIS data is of paramount importance.
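
A minimal DCT-domain fusion sketch in the spirit of the abstract above: the lowest-frequency coefficients are averaged (a simple weighted combination) and the remaining coefficients are selected by maximum magnitude. The paper's PDE-guided stage is not reproduced, and the `low_size` cutoff is an illustrative assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(ir, vis, low_size=8):
    D_ir = dctn(ir.astype(np.float64), norm="ortho")
    D_vis = dctn(vis.astype(np.float64), norm="ortho")
    # Component selection: keep the larger-magnitude DCT coefficient
    fused = np.where(np.abs(D_ir) >= np.abs(D_vis), D_ir, D_vis)
    # Weighted combination of the low-frequency block to keep overall brightness stable
    fused[:low_size, :low_size] = 0.5 * (D_ir[:low_size, :low_size] + D_vis[:low_size, :low_size])
    return idctn(fused, norm="ortho")
```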

Effectiveness of Using the TIR Band in Landsat 8 Image Classification

  • Lee, Mi Hee;Lee, Soo Bong;Kim, Yongmin;Sa, Jiwon;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.33 no.3, pp.203-209, 2015
  • This paper discusses the effectiveness of using Landsat 8 TIR (thermal infrared) band images to improve the accuracy of land-use/land-cover classification in urban areas. According to classification results for the study area using diverse band combinations, adding the TIR band to the visible and near-infrared bands through an image fusion process improved classification accuracy by 4.0% compared with a band combination that does not include the TIR band. For urban land-use/land-cover classes in particular, the producer's accuracy and user's accuracy improved by 10.2% and 3.8%, respectively. When maximum likelihood classification (MLC), which is commonly applied to remote sensing images, was used, the TIR band improved class discrimination in land-use/land-cover classification.
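
A brief sketch of a per-class Gaussian maximum-likelihood classifier applied to a pixel-by-band matrix, illustrating how stacking the TIR band with visible/near-infrared bands changes the feature space. The band selection, training labels, and comparison procedure are assumptions; this is not the study's exact workflow.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mlc_classify(pixels, train_pixels, train_labels):
    """pixels: (N, B) band values per pixel; train_*: labelled samples per class."""
    classes = np.unique(train_labels)
    scores = []
    for c in classes:
        x = train_pixels[train_labels == c]
        mvn = multivariate_normal(mean=x.mean(axis=0),
                                  cov=np.cov(x, rowvar=False),
                                  allow_singular=True)
        scores.append(mvn.logpdf(pixels))          # log-likelihood of each pixel under class c
    return classes[np.argmax(np.stack(scores, axis=0), axis=0)]

# Usage idea: build `pixels` once from the visible+NIR bands and once with the TIR band
# appended, then compare the resulting classification accuracies, as the study above does.
```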