• Title/Summary/Keyword: Fusion Image


LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim;Ga-Bin Nam;Young-Seop Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.149-154
    • /
    • 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. The feature extraction module incorporates SPP (Spatial Pyramid Pooling) to handle images of various scales effectively. Experimental results demonstrate that the proposed model not only fuses images captured at multiple focal planes into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.

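The three-module pipeline described in the abstract can be sketched as follows, assuming a PyTorch implementation that fuses two focal slices. Layer widths, kernel sizes, the pooling levels of the SPP block, and the concatenation-based fusion rule are illustrative assumptions, not the authors' exact architecture.

```python
# Rough sketch of the three-module LFFCNN layout (extraction + SPP,
# fusion, reconstruction). All widths and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Spatial pyramid pooling: pool at several grid sizes, upsample back,
    and concatenate, so features capture context at multiple scales."""
    def __init__(self, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for p in self.pool_sizes:
            pooled = F.adaptive_avg_pool2d(x, p)
            feats.append(F.interpolate(pooled, size=(h, w),
                                       mode='bilinear', align_corners=False))
        return torch.cat(feats, dim=1)

class LFFCNNSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.extract = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.spp = SPP()
        fused = ch * 4  # original features + three pyramid levels
        self.fuse = nn.Sequential(nn.Conv2d(2 * fused, ch, 3, padding=1),
                                  nn.ReLU())
        self.reconstruct = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, slice_a, slice_b):
        fa = self.spp(self.extract(slice_a))  # features of focal slice A
        fb = self.spp(self.extract(slice_b))  # features of focal slice B
        return self.reconstruct(self.fuse(torch.cat([fa, fb], dim=1)))

a, b = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
print(LFFCNNSketch()(a, b).shape)  # torch.Size([1, 1, 64, 64])
```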

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.3
    • /
    • pp.305-316
    • /
    • 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250m, 500m, and 1km spatial resolutions. However, the 500m and 1km MODIS data are of limited use over small areas with complex land cover types. In this study, we produce seven 250m spectral bands by fusing the two 250m MODIS bands with the five 500m bands. To recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal component analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these fusion methods are compared using various evaluation indicators, such as correlation, relative difference of mean, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as by visual interpretation. Among the various fusion methods, the local mean and variance matching method provides the best fusion result, both for visual interpretation and on the evaluation indicators. The fused 250m MODIS data may be used to effectively improve the accuracy of various MODIS land products.
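
Of the seven methods compared, the Brovey transform is the simplest to illustrate: each co-registered low-resolution band is rescaled by the ratio of the high-resolution band to the sum of the bands. A minimal NumPy sketch, assuming the bands have already been resampled to the 250m grid (the epsilon guard and array shapes are assumptions):

```python
# Brovey transform sketch: sharpen low-resolution bands with a
# high-resolution band via a per-pixel intensity ratio.
import numpy as np

def brovey_fuse(ms_bands, high_res):
    """ms_bands: (n, H, W) multispectral bands resampled to the
    high-resolution grid; high_res: (H, W) high-resolution band."""
    total = ms_bands.sum(axis=0) + 1e-6   # avoid division by zero
    ratio = high_res / total              # per-pixel injection gain
    return ms_bands * ratio[None, :, :]   # rescale every band

# Example: fuse five 500m MODIS bands (resampled to 250m) with a 250m band.
ms = np.random.rand(5, 400, 400).astype(np.float32)
pan = np.random.rand(400, 400).astype(np.float32)
print(brovey_fuse(ms, pan).shape)  # (5, 400, 400)
```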

Image Registration and Fusion between Passive Millimeter Wave Images and Visual Images (수동형 밀리미터파 영상과 가시 영상과의 정합 및 융합에 관한 연구)

  • Lee, Hyoung;Lee, Dong-Su;Yeom, Seok-Won;Son, Jung-Young;Guschin, Vladmir P.;Kim, Shin-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.6C
    • /
    • pp.349-354
    • /
    • 2011
  • Passive millimeter wave imaging can detect objects concealed under clothing, and it can also obtain interpretable images under low-visibility conditions such as rain, fog, smoke, and dust. However, image quality is often degraded by low spatial resolution, low signal level, and low temperature resolution. This paper addresses image registration and fusion between passive millimeter wave images and visual images. The goal of this study is to combine and visualize two different types of information together: the subject's identity and any concealed objects. The image registration process is composed of body boundary detection and an affine transform that maximizes the cross-correlation coefficient of the two edge images. The image fusion process comprises three stages: a discrete wavelet transform for image decomposition, a fusion rule for merging the coefficients, and the inverse transform for image synthesis. In the experiments, various metallic and non-metallic objects, such as a knife, gel- or liquid-type beauty aids, and a phone, are detected by passive millimeter wave imaging. The registration and fusion process visualizes the meaningful information from the two different types of sensors.
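
The registration stage can be approximated with OpenCV: edge maps stand in for the detected body boundaries, and the ECC criterion (a normalized correlation measure) is used as a stand-in for the paper's cross-correlation search over affine parameters. The thresholds and iteration counts below are assumptions.

```python
# Sketch of the registration stage: warp the visual image onto the PMMW
# image with an affine transform fitted on edge maps.
import cv2
import numpy as np

def register_to_pmmw(pmmw_gray, visual_gray):
    """Both inputs: 8-bit grayscale images. Returns the visual image
    warped into the PMMW frame."""
    # Bring the visual image onto the PMMW grid before edge detection.
    visual_gray = cv2.resize(visual_gray, pmmw_gray.shape[::-1])
    edges_ref = cv2.Canny(pmmw_gray, 50, 150).astype(np.float32)
    edges_mov = cv2.Canny(visual_gray, 50, 150).astype(np.float32)

    warp = np.eye(2, 3, dtype=np.float32)  # initial affine guess: identity
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(edges_ref, edges_mov, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = pmmw_gray.shape
    return cv2.warpAffine(visual_gray, warp, (w, h))
```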

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • Journal of Ocean Engineering and Technology
    • /
    • v.36 no.1
    • /
    • pp.32-40
    • /
    • 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Wavelength-dependent attenuation of light and reflection by very small suspended particles cause low contrast, blurring, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN depended on the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to support the monitoring of underwater work and the autonomous operation of unmanned vehicles by improving the visibility of underwater scenes.
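
As a flavor of the quantitative comparison, UCIQE can be sketched as a weighted sum of chroma spread, luminance contrast, and mean saturation in CIELab. The coefficients below follow the commonly cited values from Yang and Sowmya (2015); the percentile-based contrast and the saturation definition are common implementation choices, and scaling conventions vary between implementations.

```python
# Sketch of the UCIQE score: weighted sum of chroma standard deviation,
# luminance contrast, and mean saturation computed in CIELab.
import cv2
import numpy as np

def uciqe(bgr, c1=0.4680, c2=0.2745, c3=0.2576):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = lab[..., 0], lab[..., 1] - 128, lab[..., 2] - 128
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                           # chroma spread
    top, bottom = np.percentile(L, 99), np.percentile(L, 1)
    con_l = (top - bottom) / 255.0                   # luminance contrast
    mu_s = (chroma / (L + 1e-6)).mean()              # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```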

Image Fusion for Improving Classification

  • Lee, Dong-Cheon;Kim, Jeong-Woo;Kwon, Jay-Hyoun;Kim, Chung;Park, Ki-Surk
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1464-1466
    • /
    • 2003
  • Classification of satellite images provides information about land cover and/or land use. The quality of the classification result depends mainly on the spatial and spectral resolutions of the images. In this study, image fusion, in the form of resolution merging and band integration of multi-source satellite images (Landsat ETM+ and Ikonos), was carried out to improve classification. Resolution merging and band integration can generate high-resolution imagery with more spectral bands. Precise image co-registration is required to remove geometric distortion between images from different sources. A combination of unsupervised and supervised classification of the fused imagery was implemented to improve the classification. Combining a DEM with the classification result enabled a 3D display, which improved interpretability.

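The abstract does not commit to a specific merging rule, so the sketch below uses simple IHS (intensity substitution) merging as one plausible stand-in: the multispectral image is resampled to the panchromatic grid and its intensity channel is replaced by a histogram-matched high-resolution band.

```python
# IHS resolution-merging sketch: replace the intensity channel of the
# upsampled multispectral image with the high-resolution band.
import cv2
import numpy as np

def ihs_merge(ms_bgr, pan_gray):
    """ms_bgr: low-resolution 3-band uint8 image;
    pan_gray: high-resolution uint8 band."""
    h, w = pan_gray.shape
    ms_up = cv2.resize(ms_bgr, (w, h), interpolation=cv2.INTER_CUBIC)
    hsv = cv2.cvtColor(ms_up, cv2.COLOR_BGR2HSV)
    # Match the pan band's mean/std to the intensity channel before swapping.
    v = hsv[..., 2].astype(np.float32)
    pan = pan_gray.astype(np.float32)
    pan = (pan - pan.mean()) / (pan.std() + 1e-6) * v.std() + v.mean()
    hsv[..., 2] = np.clip(pan, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```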

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.2
    • /
    • pp.544-564
    • /
    • 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because the propagation of light in the underwater environment is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to high-quality underwater images. The network also introduces a channel attention mechanism so that it pays more attention to the channels containing important information. Detail information is protected by superposition with the feature information. Experimental results demonstrate that the proposed method produces results with correct colors and complete details, and outperforms existing methods on quantitative metrics.
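
The channel attention idea can be illustrated with a squeeze-and-excitation style block: global pooling summarizes each channel, and a small bottleneck predicts per-channel weights. The reduction ratio and layer layout are assumptions; the paper's exact block may differ.

```python
# Squeeze-and-excitation style channel attention sketch.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                        # reweight each channel

feats = torch.randn(1, 32, 64, 64)
print(ChannelAttention(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```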

X-Ray Image Enhancement Using a Boundary Division Wiener Filter and Wavelet-Based Image Fusion Approach

  • Khan, Sajid Ullah;Chai, Wang Yin;See, Chai Soo;Khan, Amjad
    • Journal of Information Processing Systems
    • /
    • v.12 no.1
    • /
    • pp.35-45
    • /
    • 2016
  • To resolve the problems of Poisson/impulse noise, blurriness, and poor sharpness in degraded X-ray images, this paper proposes a novel and efficient enhancement algorithm based on X-ray image fusion using a discrete wavelet transform. The proposed algorithm consists of two main steps. First, it applies a boundary division technique to detect pixels corrupted by Poisson and impulse noise, and then uses a Wiener filter to restore those pixels. Second, it applies a sharpening technique to the same degraded X-ray image. This yields two source X-ray images, each preserving its own enhancement effect. The details and approximations of these source X-ray images are fused via different fusion rules in the wavelet domain. Experimental results show that the proposed algorithm successfully combines the merits of Wiener filtering and sharpening, and achieves significant improvements in degraded X-ray images with respect to Poisson noise, blurriness, and edge detail.
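
A minimal sketch of the two-branch scheme, assuming SciPy and PyWavelets: one branch denoises with a Wiener filter, the other sharpens with an unsharp mask, and the two sources are fused in the wavelet domain with average-approximation and maximum-absolute-detail rules. The fusion rules, wavelet, and parameters are common defaults, not the paper's exact settings.

```python
# Two-branch enhancement sketch: Wiener-denoised and unsharp-sharpened
# sources fused in the DWT domain.
import numpy as np
import pywt
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter

def enhance_xray(img):
    x = img.astype(np.float64)
    denoised = wiener(x, mysize=5)                       # branch 1: denoise
    sharpened = x + 1.5 * (x - gaussian_filter(x, 2.0))  # branch 2: unsharp

    cA1, d1 = pywt.dwt2(denoised, 'db4')
    cA2, d2 = pywt.dwt2(sharpened, 'db4')
    cA = 0.5 * (cA1 + cA2)                               # average approximations
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)  # keep stronger detail
                    for a, b in zip(d1, d2))
    return pywt.idwt2((cA, details), 'db4')
```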

Cascade Fusion-Based Multi-Scale Enhancement of Thermal Image (캐스케이드 융합 기반 다중 스케일 열화상 향상 기법)

  • Kyung-Jae Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.301-307
    • /
    • 2024
  • This study introduces a novel cascade fusion architecture for enhancing thermal images across various scale conditions. Processing thermal images at multiple scales has been challenging because existing methods are designed for specific scales. To overcome this limitation, this paper proposes a unified framework that uses cascade feature fusion to learn multi-scale representations effectively. Confidence maps from different image scales are fused in a cascaded manner, enabling scale-invariant learning. The architecture comprises end-to-end trained convolutional neural networks that enhance image quality by reinforcing mutual scale dependencies. Experimental results indicate that the proposed technique outperforms existing methods in multi-scale thermal image enhancement, with consistent improvements in image quality metrics. The cascade fusion design facilitates robust generalization across scales and efficient learning of cross-scale representations.
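
One way to read the cascade idea is sketched below: the image is enhanced at a coarse scale, the result and its predicted confidence map are upsampled, and the confidence decides how much of the coarse output to keep at the finer scale. The two-scale cascade and all layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Two-scale cascade fusion sketch: coarse enhancement is blended into the
# fine-scale result according to an upsampled confidence map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnhanceBlock(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 1, 3, padding=1))
        self.conf = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        out = self.body(x)
        return out, self.conf(out)       # enhanced image + confidence map

class CascadeFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse, self.fine = EnhanceBlock(), EnhanceBlock()

    def forward(self, x):
        small = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                              align_corners=False)
        out_c, conf_c = self.coarse(small)
        out_c = F.interpolate(out_c, size=x.shape[2:], mode='bilinear',
                              align_corners=False)
        conf_c = F.interpolate(conf_c, size=x.shape[2:], mode='bilinear',
                               align_corners=False)
        out_f, _ = self.fine(x)
        return conf_c * out_c + (1 - conf_c) * out_f  # cascaded fusion
```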

Robust Image Fusion Using Stationary Wavelet Transform (정상 웨이블렛 변환을 이용한 로버스트 영상 융합)

  • Kim, Hee-Hoon;Kang, Seung-Hyo;Park, Jea-Hyun;Ha, Hyun-Ho;Lim, Jin-Soo;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.6
    • /
    • pp.1181-1196
    • /
    • 2011
  • Image fusion is the process of combining information from two or more source images of a scene into a single composite image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging, and defense. The most common wavelet-based fusion is discrete wavelet transform fusion, in which the high-frequency and low-frequency sub-bands are combined according to activity measures computed over local windows, such as the standard deviation and the mean, respectively. However, the discrete wavelet transform is not translation-invariant, and it often yields block artifacts in the fused image. In this paper, we propose a robust image fusion method based on the stationary wavelet transform to overcome this drawback. We use the interquartile range as a robust activity measure of spread in the high-frequency sub-bands, and we combine the low-frequency sub-band based on the interquartile-range information present in the high-frequency sub-bands. We evaluate the proposed method quantitatively and qualitatively and compare it to existing fusion methods. Experimental results indicate that the proposed method is more effective and provides satisfactory fusion results.
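
A minimal sketch of the proposed rule, assuming PyWavelets and SciPy: the local interquartile range serves as the robust activity measure, the detail coefficient with the higher activity wins, and the low-frequency band is selected according to the accumulated detail activity of each source. The window size, wavelet, and single decomposition level are assumptions; square, even-sided inputs are safest for the SWT.

```python
# Stationary wavelet fusion sketch with a local-IQR activity measure.
import numpy as np
import pywt
from scipy.ndimage import percentile_filter

def local_iqr(x, size=5):
    # Robust local spread: 75th minus 25th percentile over a window.
    return (percentile_filter(x, 75, size=size)
            - percentile_filter(x, 25, size=size))

def swt_fuse(img1, img2, wavelet='db2'):
    (cA1, details1), = pywt.swt2(img1.astype(np.float64), wavelet, level=1)
    (cA2, details2), = pywt.swt2(img2.astype(np.float64), wavelet, level=1)

    fused_details, act1, act2 = [], 0.0, 0.0
    for d1, d2 in zip(details1, details2):
        a1, a2 = local_iqr(d1), local_iqr(d2)
        act1, act2 = act1 + a1, act2 + a2
        fused_details.append(np.where(a1 >= a2, d1, d2))  # robust choose-max
    # Low-frequency band follows the detail activity of each source.
    cA = np.where(act1 >= act2, cA1, cA2)
    return pywt.iswt2([(cA, tuple(fused_details))], wavelet)
```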

2D Face Image Recognition and Authentication Based on Data Fusion (데이터 퓨전을 이용한 얼굴영상 인식 및 인증에 관한 연구)

  • 박성원;권지웅;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.302-306
    • /
    • 2001
  • Because face images have many variations (expression, illumination, face orientation, etc.), no widely used method achieves a high recognition rate. To address this difficulty, data fusion, which combines various sources of information, has been studied. However, previous data fusion research fused additional biometric information (fingerprint, voice, etc.) with the face image. In this paper, the cooperative results of several face image recognition modules are fused without using additional biometric information. To fuse the results of the individual face image recognition modules, we use a re-defined mass function based on Dempster-Shafer fusion theory. Experimental results from fusing several face recognition modules are presented to show that the proposed fusion model performs better than a single face recognition module, without using additional biometric information.

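Dempster's rule of combination, which underlies this fusion step, can be sketched directly: each module assigns mass to sets of candidate identities, and combination renormalizes away the conflicting mass. The frame of discernment and the example masses below are illustrative.

```python
# Dempster's rule of combination over sets of candidate identities.
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass. Returns the
    Dempster-combined mass function, normalized by the conflict mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb           # mass falling on the empty set
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two modules spread belief over identities A, B and the full frame.
theta = frozenset({'A', 'B'})
module1 = {frozenset({'A'}): 0.7, theta: 0.3}
module2 = {frozenset({'A'}): 0.6, frozenset({'B'}): 0.2, theta: 0.2}
print(combine(module1, module2))  # belief concentrates on {'A'}
```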