• Title/Summary/Keyword: Fusion Method

Search results: 1,958

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping; Qin, Zheng; Wang, Guolong; Zhang, Huidi; Huang, Kai; Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, which operate either in the spatial domain or in a transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer and deconvolutional layers. Our deep fusion model is efficient and robust while demonstrating state-of-the-art fusion quality. We explore different parameter settings to trade off performance against speed. Moreover, experimental results on our dataset show that the network performs well under both subjective visual perception and objective assessment metrics.
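
The following is a minimal PyTorch sketch of the conv → fusion → deconv two-stream idea described in the abstract; the layer counts, channel widths, shared-weight encoder and concatenation-based fusion layer are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch of a two-stream fully convolutional fusion network.
# All sizes and the fusion-by-concatenation rule are illustrative assumptions.
import torch
import torch.nn as nn

class TwoStreamFusionNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Feature extractor applied to each focus image (one "stream", weights shared).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion layer: merge the two feature streams (concatenation + 1x1 conv).
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Deconvolutional layers map fused features back to a clean image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 1, 3, padding=1),
        )

    def forward(self, img_a, img_b):
        feat_a = self.encoder(img_a)
        feat_b = self.encoder(img_b)
        fused = torch.relu(self.fuse(torch.cat([feat_a, feat_b], dim=1)))
        return self.decoder(fused)

# Usage: fuse a pair of single-channel focus images.
net = TwoStreamFusionNet()
a, b = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
out = net(a, b)  # fused image, shape (1, 1, 128, 128)
```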

Two Scale Fusion Method of Infrared and Visible Images Using Saliency and Variance

  • 김영춘; 안상호
    • 한국멀티미디어학회논문지 / Vol. 19, No. 12 / pp.1951-1959 / 2016
  • In this paper, we propose a two-scale fusion method for infrared and visible images using saliency and variance. Each image is separated into two scales: a base layer containing the low-frequency component and a detail layer containing the high-frequency component. These layers are then synthesized using weights, with the saliency and the variance of the images serving as the fusion weights for the two scales. The proposed method is tested on several image pairs, and its performance is evaluated quantitatively using objective fusion metrics.
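
A minimal NumPy/SciPy sketch of this kind of two-scale rule is shown below; the base/detail split and the saliency and variance measures used here are simple stand-ins for the paper's definitions.

```python
# Two-scale fusion sketch: base layer = local mean, detail layer = residual;
# base layers weighted by a saliency proxy, detail layers by local variance.
# Inputs are assumed to be float grayscale arrays of equal shape.
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_fuse(ir, vis, base_size=31, win=7, eps=1e-8):
    # Base (low-frequency) and detail (high-frequency) layers of each image.
    base_ir, base_vis = uniform_filter(ir, base_size), uniform_filter(vis, base_size)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # Illustrative saliency: magnitude of the detail response, weights the base layers.
    sal_ir, sal_vis = np.abs(det_ir), np.abs(det_vis)
    w_base = sal_ir / (sal_ir + sal_vis + eps)

    # Illustrative detail weight: local variance of each source image.
    var_ir = uniform_filter(ir * ir, win) - uniform_filter(ir, win) ** 2
    var_vis = uniform_filter(vis * vis, win) - uniform_filter(vis, win) ** 2
    w_det = var_ir / (var_ir + var_vis + eps)

    base = w_base * base_ir + (1 - w_base) * base_vis
    detail = w_det * det_ir + (1 - w_det) * det_vis
    return base + detail
```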

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan; Yang, Feng; Zhao, Weijun; Guo, Yiliang; Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 8 / pp.2749-2763 / 2021
  • The desirable result of infrared (IR) and visible (VIS) image fusion should have textural details from VIS images and salient targets from IR images. However, detail information in the dark regions of a VIS image has low contrast and blurry edges, which degrades fusion performance. To address the problem of fuzzy details in dark regions, we propose a reflectance estimation method for IR and VIS image fusion. To maintain and enhance details in these dark regions, a dark region approximation (DRA) is introduced to optimize the Retinex model. With the improved, DRA-based Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of VIS images and the high-contrast targets of IR images. Experimental results show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
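
As a rough illustration of the overall pipeline only, the sketch below replaces the paper's dark-region approximation and quasi-Newton reflectance estimation with a crude Gaussian-blur illumination estimate and a simple weighted fusion rule; every quantity here is an assumption for illustration.

```python
# Pipeline sketch: estimate illumination of the VIS image, recover a reflectance
# component, then fuse that reflectance with the IR image. The DRA and
# quasi-Newton steps from the paper are NOT reproduced.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_ir_vis(ir, vis, sigma=15, eps=1e-6):
    # Crude Retinex-style decomposition: illumination ~ heavy low-pass of VIS.
    illumination = gaussian_filter(vis, sigma) + eps
    reflectance = vis / illumination          # detail component, boosted in dark regions

    # Illustrative fusion rule: pixel-wise weight favouring bright (salient) IR targets.
    w = ir / (ir.max() + eps)
    return w * ir + (1.0 - w) * reflectance * illumination.mean()
```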

Posterior Atlantoaxial Fusion with C1 Lateral Mass Screw and C2 Pedicle Screw Supplemented with Miniplate Fixation for Interlaminar Fusion : A Preliminary Report

  • Yoon, Sang-Mok; Baek, Jin-Wook; Kim, Dae-Hyun
    • Journal of Korean Neurosurgical Society / Vol. 52, No. 2 / pp.120-125 / 2012
  • Objective : To investigate the feasibility of C1 lateral mass screw and C2 pedicle screw fixation with a polyaxial screw and rod system supplemented with miniplates for interlaminar fusion to treat various atlantoaxial instabilities. Methods : After posterior atlantoaxial fixation with lateral mass screws in the atlas and pedicle screws in the axis, we used two miniplates to fixate the interlaminar iliac bone graft instead of sublaminar wiring. We performed this procedure in thirteen patients with atlantoaxial instability and retrospectively evaluated the bone fusion rate and complications. Results : With this method we achieved excellent bone fusion compared with the results of other methods, without any procedure-related complications. Conclusion : C1 lateral mass screw and C2 pedicle screw fixation with a polyaxial screw and rod system supplemented with miniplates for interlaminar fusion may be an efficient alternative for treating various atlantoaxial instabilities.

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo; Jing, Zhongliang; Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 11 / pp.4118-4136 / 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture directional image structure arbitrarily by exploiting a suitably parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and preserves structural information efficiently and consistently. A performance analysis of the proposed method on real-world images demonstrates that it is competitive with state-of-the-art fusion methods, especially in combining structural information.

Implementation of Wavelet Transform based Image Fusion and JPEG2000 using MAD Order Statistics for Multi-Image

  • 이철
    • 한국정보통신학회논문지 / Vol. 17, No. 11 / pp.2636-2644 / 2013
  • In this paper, we discuss a wavelet-based MAD order-statistic approach that can handle the challenging fusion of multi-modal images with differing characteristics, namely visible and infrared images acquired from different sensors. For effective fusion of the two dissimilar images, we propose fusing the wavelet coefficients of the approximation subband by a weighted average, while the wavelet coefficients of the detail subbands are compared against a threshold based on the median absolute deviation (MAD) so that only the strengths of both images are expressed. In particular, existing fusion rules form the fused image from the pixel- or index-wise magnitude relationship between the two images, so distortion components are likely to enter the fused image and produce a distorted result. To compensate for this drawback, the threshold of the proposed method is set from the image statistics while excluding distortion components such as noise. Comprehensive experiments on a variety of multi-modal images, compared with existing image fusion methods, confirm the superiority of the proposed fusion method. To guarantee real-time processing, the proposed method was implemented in hardware using a DSP and a Xilinx FPGA.
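
A sketch of this style of fusion rule using PyWavelets is given below; the exact threshold form, the detail-subband selection logic and the approximation-subband weight are illustrative assumptions, not the rule from the paper.

```python
# Wavelet fusion sketch: weighted average for the approximation subband,
# MAD-gated magnitude selection for the detail subbands.
import numpy as np
import pywt

def mad_threshold(coeffs, k=3.0):
    # Median absolute deviation based threshold, robust to noise outliers.
    mad = np.median(np.abs(coeffs - np.median(coeffs)))
    return k * mad

def wavelet_mad_fuse(vis, ir, wavelet="db4", level=2, w=0.5):
    ca_v, *cd_v = pywt.wavedec2(vis, wavelet, level=level)
    ca_i, *cd_i = pywt.wavedec2(ir, wavelet, level=level)

    # Approximation subband: weighted average of the two images.
    fused = [w * ca_v + (1 - w) * ca_i]

    # Detail subbands: keep the larger-magnitude coefficient, but average
    # coefficients that fall below the MAD threshold (treated as noise-like).
    for dv, di in zip(cd_v, cd_i):
        bands = []
        for bv, bi in zip(dv, di):
            t = max(mad_threshold(bv), mad_threshold(bi))
            band = np.where(np.abs(bv) >= np.abs(bi), bv, bi)
            weak = np.abs(band) < t
            band[weak] = 0.5 * (bv + bi)[weak]
            bands.append(band)
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```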

Modified à trous Algorithm based Wavelet Pan-sharpening Method Using IKONOS Image

  • 김용현; 최재완; 김혜진; 김용일
    • 대한토목학회논문집 / Vol. 29, No. 2D / pp.305-309 / 2009
  • The purpose of image fusion is to integrate information from multiple images into a single image. For satellite image fusion, many methods have been proposed to combine a high-resolution panchromatic image with low-resolution multispectral images, and preserving both the spatial detail and the spectral information of the fused result is very important. Wavelet-transform-based fusion methods give better results than other fusion methods in terms of spectral preservation. In this study, we propose a wavelet pan-sharpening method based on a modified à trous algorithm using IKONOS imagery. Experiments on IKONOS images confirm that the proposed method is more effective than the conventional à trous-based fusion method in terms of both spatial detail and spectral preservation.
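
The sketch below illustrates the general à trous injection scheme (undecimated B3-spline decomposition of the panchromatic band, additive injection of its detail planes into each upsampled multispectral band); it is not the modified algorithm proposed in the paper.

```python
# À trous-style wavelet pan-sharpening sketch with a standard B3-spline kernel.
# MS bands are assumed to be already upsampled to the PAN grid.
import numpy as np
from scipy.ndimage import convolve

B3 = np.array([1, 4, 6, 4, 1]) / 16.0

def atrous_planes(img, levels=2):
    """Return (detail planes, residual) of an undecimated à trous decomposition."""
    planes, approx = [], img.astype(float)
    for j in range(levels):
        # Insert 2**j - 1 zeros between kernel taps (the "holes" of the à trous scheme).
        k1d = np.zeros(4 * 2**j + 1)
        k1d[:: 2**j] = B3
        kernel = np.outer(k1d, k1d)
        smooth = convolve(approx, kernel, mode="nearest")
        planes.append(approx - smooth)   # wavelet (detail) plane at scale j
        approx = smooth
    return planes, approx

def atrous_pansharpen(pan, ms_bands, levels=2):
    details, _ = atrous_planes(pan, levels)
    injection = sum(details)
    # Additive injection of the PAN detail planes into each MS band.
    return [band.astype(float) + injection for band in ms_bands]
```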

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen; Wang, Minjuan; Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp.4395-4412 / 2020
  • Multisource image fusion has become an active topic in recent years owing to its higher segmentation rate. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot extract strong contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, is presented. Firstly, the multisource images are decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) are used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients are reconstructed into a fusion image using the inverse NSST. Finally, the shape feature is extracted using an automatic thresholding algorithm and refined using morphological operations, and the highest pig-body temperature is obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieves a 2.102-4.066% higher average accuracy rate than traditional algorithms while also improving efficiency.

An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin; Jang, Chulhee; Jin, Sanghun; Hong, Yonghee
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 12 / pp.69-77 / 2017
  • This paper presents a novel framework for multi-scale image fusion. The Multi-scale Kalman Smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. In general, this approach delivers outstanding fusion performance in terms of accuracy and efficiency; however, the quad-tree based method is of limited use in certain applications because its stair-like covariance structure produces unrealistic blocky artifacts in the fusion result where finest-scale data are void or missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed. By employing a Super Resolution (SR) technique within the MKS algorithm, finely resolved measurements are generated and blended through the tree structure so that missing detail in the data-void regions of the fine-scale image is properly inferred and the blocky artifacts are successfully suppressed in the fusion result. Simulation results show that the proposed method provides significantly improved fusion results over the conventional MKS algorithm in terms of both Root Mean Square Error (RMSE) and visual quality.

A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu; Xiang, Keyun; Chen, Xiaopeng; Liu, Cheng
    • Journal of Information Processing Systems / Vol. 17, No. 5 / pp.1004-1019 / 2021
  • To address the low contrast, fuzzy edge details and missing edge details that arise when fusing noisy images, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency component and high-frequency components. Since noise and edge information are mainly distributed in the high-frequency components, the improved bilateral filtering method is used to process the high-frequency components of the two images, filtering noise and computing the detail of the infrared image's high-frequency component. By superimposing the high-frequency components of the infrared and visible images, the edge details of both images are extracted as far as possible, edge information is enhanced, and the visual effect becomes clearer. For the low-frequency coefficients, a fusion rule based on the local-area standard deviation is adopted. Finally, the fused high- and low-frequency coefficients are reconstructed by the inverse NSCT to obtain the fusion image. The fusion results show that edges, contours, texture and other details are maintained and enhanced while noise is filtered, yielding a fused image with clear edges. The algorithm can better filter noise and obtain clear fused images in noisy infrared and visible light image fusion.
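
The low-frequency fusion rule is the most concretely specified step of this abstract; below is a small NumPy sketch of a local-standard-deviation selection rule, operating on generic approximation bands in place of the NSCT low-frequency component (the NSCT itself and the improved bilateral filter are not reproduced).

```python
# Low-frequency fusion rule sketch: at each pixel, keep the approximation
# coefficient whose local standard deviation is larger. Inputs can be the
# approximation band of any multiscale transform.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(x, win=5):
    mean = uniform_filter(x, win)
    return np.sqrt(np.maximum(uniform_filter(x * x, win) - mean * mean, 0.0))

def fuse_lowfreq(low_ir, low_vis, win=5):
    choose_ir = local_std(low_ir, win) >= local_std(low_vis, win)
    return np.where(choose_ir, low_ir, low_vis)
```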