• Title/Abstract/Keyword: Visible Image


Visible Image Enhancement Method Considering Thermal Information from Infrared Image

  • 김선걸;강행봉
    • Journal of Broadcast Engineering
    • /
    • Vol. 18, No. 4
    • /
    • pp.550-558
    • /
    • 2013
  • Visible and far-infrared images carry different information: texture and thermal information, respectively. Therefore, for visible image enhancement, exploiting the thermal information of the far-infrared image, which is absent from the visible image, yields better results than using the visible image alone. In this paper, to enhance a visible image effectively using a far-infrared image, we build a weight map according to how much enhancement each region of the visible image needs. The weight map is computed from saturation and brightness, and its values are adjusted by taking the thermal information of the far-infrared image into account. Finally, using the adjusted weight map, the far-infrared and visible image information is fused to produce a result image that effectively contains the information of both. Experimental results show that, for the regions of the visible image that need enhancement, fusion with the far-infrared information yields results superior to the original visible image.
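The abstract does not give the exact weight formulation, so the following is a minimal numpy sketch of the idea under assumed inputs (all names and the specific formula are illustrative, not the paper's): the weight grows where the visible image is dark and desaturated, and is modulated by the thermal response.

```python
import numpy as np

def fuse_visible_thermal(vis_luma, vis_sat, ir, alpha=0.5):
    """Blend a visible-light luminance channel with a far-infrared image
    using a weight map driven by saturation and brightness.

    All inputs are float arrays in [0, 1] with the same shape.  The weight
    is large where the visible image is dark and desaturated (i.e. where
    enhancement is needed), then modulated by the thermal response so hot
    regions receive more infrared detail.
    """
    need = (1.0 - vis_luma) * (1.0 - vis_sat)   # how much enhancement is needed
    w = np.clip(alpha * need * ir, 0.0, 1.0)    # thermal-adjusted weight map
    return (1.0 - w) * vis_luma + w * ir        # per-pixel blend
```

A bright, saturated pixel keeps its visible value almost unchanged, while a dark, desaturated pixel with a strong thermal response is pulled toward the infrared image.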

A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu;Xiang, Keyun;Chen, Xiaopeng;Liu, Cheng
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 5
    • /
    • pp.1004-1019
    • /
    • 2021
  • To address low contrast, fuzzy edge details, and missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency and a high-frequency component. High-frequency noise and edge information are mainly concentrated in the high-frequency component, so the improved bilateral filter is applied to the high-frequency components of the two images, suppressing noise while computing the detail of the infrared image's high-frequency component. Superimposing the high-frequency components of the infrared and visible images extracts as much edge detail from both as possible, while edge information is enhanced and the visual effect becomes clearer. For the low-frequency coefficients, a local-area standard deviation fusion rule is adopted. Finally, the fused image is obtained by applying the inverse NSCT to the fused high- and low-frequency coefficients. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while noise is filtered out, yielding a fused image with clear edges. The algorithm filters noise well and obtains clear fused images in noisy infrared and visible light image fusion.
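The NSCT itself is not available in standard libraries, but the low-frequency fusion rule the abstract names can be sketched on its own: at each pixel, keep the coefficient from whichever image has the higher local-area standard deviation (more local activity). This numpy sketch assumes the decomposed low-frequency bands are already given; the window radius and implementation details are mine.

```python
import numpy as np

def local_std(img, r=2):
    """Standard deviation over a (2r+1)x(2r+1) window, with reflect padding."""
    pad = np.pad(img, r, mode='reflect')
    h, w = img.shape
    windows = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return windows.reshape(h, w, -1).std(axis=2)

def fuse_low_freq(low_a, low_b, r=2):
    """Local-area standard deviation rule: at each pixel keep the
    low-frequency coefficient from the image whose neighbourhood
    shows more local activity."""
    sa, sb = local_std(low_a, r), local_std(low_b, r)
    return np.where(sa >= sb, low_a, low_b)
```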

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 6
    • /
    • pp.3092-3107
    • /
    • 2019
  • Visible-infrared image fusion synthesizes an infrared image and a visible image into a single fused image, combining the complementary advantages of both. The infrared image can capture a target object in dark or foggy environments, but its utility is hindered by the blurry appearance of objects. The visible image, on the other hand, clearly shows an object under normal lighting conditions but is not ideal in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method. The proposed multi-guided filter is a modification of the guided filter that accepts multiple guidance images. Using this filter, we build a real-time image fusion method that is much faster than conventional image fusion methods. In experiments, we compare the proposed method and conventional methods in terms of quantitative and qualitative results, fusion speed, and flickering artifacts. The proposed method synthesizes 57.93 frames per second at an image size of 320×270. Based on our experiments, we confirmed that the proposed method is able to perform real-time processing. In addition, the proposed method synthesizes flicker-free video.
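The paper's multi-guided filter extends the standard guided filter to several guidance images; the single-guide base case it modifies can be sketched in a few lines of numpy (box-filter radius, eps, and the box-filter implementation here are illustrative choices, not the paper's):

```python
import numpy as np

def box(img, r):
    """Box (mean) filter over a (2r+1)x(2r+1) window, edge padding."""
    pad = np.pad(img, r, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return win.mean(axis=(2, 3))

def guided_filter(guide, src, r=4, eps=1e-3):
    """Standard single-guide guided filter: fit a local linear model
    src ~ a * guide + b in each window, then average the coefficients."""
    mean_I, mean_p = box(guide, r), box(src, r)
    corr_I, corr_Ip = box(guide * guide, r), box(guide * src, r)
    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # eps regularizes flat regions
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)
```

In a flat region the variance of the guide vanishes, so `a` goes to 0 and the output falls back to the local mean of the source, which is the smoothing behaviour the fusion method relies on.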

Visible and NIR Image Synthesis Using Laplacian Pyramid and Principal Component Analysis

  • 손동민;권혁주;이성학
    • Journal of Sensor Science and Technology
    • /
    • Vol. 29, No. 2
    • /
    • pp.133-140
    • /
    • 2020
  • This study proposes a method of blending visible and near-infrared (NIR) images to enhance edge details and local contrast. The proposed method consists of radiance map generation and color compensation. The radiance map is produced by a Laplacian pyramid and a soft mixing method based on principal component analysis. The color compensation method uses the ratio between the composed radiance map and the luminance channel of the visible image to preserve the visible image's chrominance. The proposed method yields better edge details than a conventional visible and NIR image blending method.
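A Laplacian-pyramid blend of this kind can be sketched in numpy. This is a simplified stand-in: it uses 2x2 average pooling and nearest-neighbour upsampling (so reconstruction is exact for power-of-two sizes) and fixed per-level weights in place of the paper's PCA-based soft mixing.

```python
import numpy as np

def down(img):  # 2x2 average pooling
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):    # nearest-neighbour upsampling
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))  # band-pass detail at this level
        img = low
    pyr.append(img)                # residual low-pass
    return pyr

def blend_pyramids(pa, pb, weights):
    """Per-level soft mixing: each w in [0, 1] weights pyramid A against B."""
    return [w * a + (1 - w) * b for a, b, w in zip(pa, pb, weights)]

def reconstruct(pyr):
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = up(img) + band
    return img
```

Because each band stores exactly what the downsample discarded, `reconstruct(laplacian_pyramid(x, n))` recovers `x` bit-for-bit, so all blending happens purely in the per-level weights.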

Reversible Multipurpose Watermarking Algorithm Using ResNet and Perceptual Hashing

  • Mingfang Jiang;Hengfu Yang
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.756-766
    • /
    • 2023
  • To effectively track the illegal use of digital images and maintain the security of digital image communication on the Internet, this paper proposes a reversible multipurpose image watermarking algorithm based on a deep residual network (ResNet) and perceptual hashing (also called MWR). The algorithm first combines perceptual image hashing to generate a digital fingerprint that depends on the user's identity information and image characteristics. Then it embeds the removable visible watermark and digital fingerprint in two different regions of the orthogonal separation of the image. The embedding strength of the digital fingerprint is computed using ResNet. Because of the embedding of the removable visible watermark, the conflict between the copyright notice and the user's browsing is balanced. Moreover, image authentication and traitor tracking are realized through digital fingerprint insertion. The experiments show that the scheme has good visual transparency and watermark visibility. The use of chaotic mapping in the visible watermark insertion process enhances the security of the multipurpose watermark scheme, and unauthorized users without correct keys cannot effectively remove the visible watermark.
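The abstract does not specify which perceptual hash the MWR scheme uses; a common minimal form is the average hash, sketched here in numpy purely to illustrate how a content-dependent fingerprint tolerates mild image changes.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Simple perceptual (average) hash: shrink the image to a small grid,
    then mark each cell as above/below the grid mean.  Content-preserving
    edits leave most bits unchanged, so the Hamming distance between two
    hashes measures perceptual similarity."""
    h, w = img.shape
    ys = np.arange(hash_size + 1) * h // hash_size   # crude area downsampling
    xs = np.arange(hash_size + 1) * w // hash_size
    small = np.array([[img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(hash_size)] for i in range(hash_size)])
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))
```

Uniformly rescaling the brightness rescales every cell and the mean together, so the hash is unchanged; this invariance to global intensity shifts is what makes such fingerprints usable for authentication.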

Infrared and Visible Image Fusion Based on NSCT and Deep Learning

  • Feng, Xin
    • Journal of Information Processing Systems
    • /
    • Vol. 14, No. 6
    • /
    • pp.1405-1419
    • /
    • 2018
  • An image fusion method is proposed on the basis of depth-model segmentation to overcome the noise interference and artifacts caused by infrared and visible image fusion. First, a deep Boltzmann machine performs prior learning of the infrared and visible target and background contours, and a depth segmentation model of the contours is constructed. The Split Bregman iterative algorithm is employed to obtain the optimal energy segmentation of the infrared and visible image contours. Then, the nonsubsampled contourlet transform (NSCT) is applied to decompose the source images, and corresponding rules are used to fuse the coefficients in light of the segmented background contour. Finally, the inverse NSCT reconstructs the fused image. MATLAB simulation results indicate that the proposed algorithm effectively obtains fusion results for both target and background contours, with high contrast and noise suppression in subjective evaluation as well as clear advantages in objective quantitative indicators.

Face Recognition by Fusion of a Face Image and Predicted Thermal Infrared Texture

  • 공성곤
    • Journal of Korean Institute of Intelligent Systems
    • /
    • Vol. 25, No. 5
    • /
    • pp.437-443
    • /
    • 2015
  • This paper studies a face recognition method based on the data fusion of a visible-light face image and a thermal infrared texture predicted from it. The proposed technique transforms the visible face image and the thermal infrared texture into feature vectors in a low-dimensional feature space by PCA, and then uses a multilayer neural network to predict the thermal infrared features of the face from the visible-image features and generate the thermal infrared texture. In the training stage, each pair of visible and thermal infrared images acquired from a given subject is transformed into the low-dimensional feature space by PCA, and the internal parameters of the neural network, which corresponds to a nonlinear function mapping visible-image features to thermal-distribution features, are determined. The trained network predicts the PCA coefficients of the thermal energy distribution from the input visible face features and generates the thermal infrared texture from them. Face recognition experiments were performed on the NIST/Equinox database using two representative face recognition algorithms, Eigenfaces and Fisherfaces. The data fusion of the predicted thermal infrared texture and the visible face image improved recognition performance over using the visible face image alone, as verified by receiver operating characteristic (ROC) curves and first-match performance.
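The PCA feature pipeline the abstract describes can be sketched in numpy. Note the hedges: the paper uses a multilayer neural network for the visible-to-thermal coefficient mapping; the least-squares map below is a linear stand-in for illustration only, and all function names are mine.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on the rows of X (one flattened face image per row)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]              # mean face and top-k eigenfaces

def pca_project(X, mean, comps):
    return (X - mean) @ comps.T      # low-dimensional feature vectors

def pca_reconstruct(Z, mean, comps):
    return Z @ comps + mean          # back to image space (e.g. thermal texture)

def fit_linear_map(Zv, Zt):
    """Least-squares stand-in for the paper's multilayer network:
    maps visible-image PCA coefficients to thermal PCA coefficients."""
    W, *_ = np.linalg.lstsq(Zv, Zt, rcond=None)
    return W
```

At test time, one would project a visible face into PCA space, apply the learned map, and run `pca_reconstruct` with the thermal eigenbasis to synthesize the predicted thermal texture.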

A Novel Image Dehazing Algorithm Based on Dual-tree Complex Wavelet Transform

  • Huang, Changxin;Li, Wei;Han, Songchen;Liang, Binbin;Cheng, Peng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 10
    • /
    • pp.5039-5055
    • /
    • 2018
  • The quality of natural outdoor images captured by visible camera sensors is usually degraded by haze in the atmosphere. In this paper, a fast image dehazing method based on visible and near-infrared image fusion is proposed. A visible and a near-infrared (NIR) image of the same scene are fused based on the dual-tree complex wavelet transform (DT-CWT) to generate a dehazed color image. The color of the fused image is regulated through the haze concentration estimated by the dark channel prior (DCP). The experimental results demonstrate that the proposed method outperforms conventional dehazing methods and effectively solves the color distortion problem in the dehazing process.
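The dark channel prior used here to estimate haze concentration is straightforward to sketch in numpy: take the per-pixel minimum over colour channels, then a minimum filter over a local patch (the patch size and atmospheric-light handling below are simplified assumptions, not the paper's settings).

```python
import numpy as np

def dark_channel(rgb, patch=7):
    """Dark channel prior: per-pixel minimum over colour channels, then a
    minimum filter over a local patch.  High values indicate denser haze."""
    mins = rgb.min(axis=2)
    r = patch // 2
    pad = np.pad(mins, r, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (patch, patch))
    return win.min(axis=(2, 3))

def haze_estimate(rgb, atmosphere=1.0, omega=0.95, patch=7):
    """Transmission estimate t(x) = 1 - omega * dark_channel(I / A),
    assuming a scalar atmospheric light A for simplicity."""
    return 1.0 - omega * dark_channel(rgb / atmosphere, patch)
```

A fully white (haze-saturated) image has a dark channel of 1 everywhere, so its estimated transmission collapses to 1 - omega, the conventional floor that keeps a trace of haze for depth perception.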

Fast Generation of Stereoscopic Virtual Environment Display Using P-buffer

  • Heo, Jun-Hyeok;Jung, Soon-Ki;Wohn, Kwang-Yun
    • Journal of Electrical Engineering and information Science
    • /
    • Vol. 3, No. 2
    • /
    • pp.202-210
    • /
    • 1998
  • This paper is concerned with the efficient generation of stereoscopic views for complex virtual environments by exploiting frame coherence in visibility. The basic idea is to keep track of visible polygons throughout the rendering process. The P-buffer, a buffer of image size, holds the id of the visible polygon for each pixel. This contrasts with the frame buffer and the Z-buffer, which hold color information and depth information, respectively. To generate a consecutive image, the positions and orientations of the polygons visible in the current view are updated according to the viewer's movements and re-rendered into the current image, under the assumption that, when the viewer moves slightly, the visibility of polygons remains unchanged. In the case of stereoscopic views, rendering the right(left) image using only the polygons visible in the left(right) image introduces little difficulty: the smaller the difference between the two images, the easier matching becomes in perceiving depth. Some psychophysical experiments have been conducted to support this claim. The computational complexity of generating a right(left) image from the previous left(right) image is bounded by the size of the image space and is accordingly somewhat independent of the complexity of the 3-D scene.
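The P-buffer idea (store a polygon id per pixel instead of a color or depth) can be illustrated with a toy numpy rasterizer. The per-polygon depth-map input format here is an assumption chosen to keep the sketch short, not the paper's data structure.

```python
import numpy as np

def render_pbuffer(depth_layers):
    """Toy P-buffer: given per-polygon depth maps (np.inf where a polygon
    does not cover a pixel), store for every pixel the index of the nearest
    (visible) polygon rather than its color or depth."""
    stack = np.stack(depth_layers)             # (n_polys, h, w)
    pbuf = stack.argmin(axis=0)                # nearest polygon id per pixel
    pbuf[np.isinf(stack.min(axis=0))] = -1     # background: no polygon visible
    return pbuf
```

To re-render after a small viewer movement, only the polygons whose ids appear in this buffer need to be transformed and drawn again, which is exactly the coherence the method exploits.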


Deep Facade Parsing with Occlusions

  • Ma, Wenguang;Ma, Wei;Xu, Shibiao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 2
    • /
    • pp.524-543
    • /
    • 2022
  • Correct facade image parsing is essential to the semantic understanding of outdoor scenes. Unfortunately, various occlusions often appear in front of buildings, causing many existing methods to fail. In this paper, we propose an end-to-end deep network for facade parsing with occlusions. The network learns to decompose an input image into visible and invisible parts by occlusion reasoning. A context aggregation module is then proposed to collect nonlocal cues for semantic segmentation of the visible part. In addition, considering the regularity of man-made buildings, a repetitive pattern completion branch is designed to infer the contents of the invisible regions by referring to the visible part. Finally, the parsing map of the input facade image is generated by fusing the visible and invisible results. Experiments on both synthetic and real datasets demonstrate that the proposed method outperforms state-of-the-art methods in parsing facades with occlusions. Moreover, we applied our method to image inpainting and 3D semantic modeling.