• Title/Summary/Keyword: Fusion Image


Evidential Fusion of Multisensor Multichannel Imagery

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.1
    • /
    • pp.75-85
    • /
    • 2006
  • This paper deals with data fusion for the problem of land-cover classification using multisensor imagery. Dempster-Shafer evidence theory is employed to combine the information extracted from multiple data sets of the same site. The Dempster-Shafer approach has two important advantages for remote sensing applications: it makes it possible to consider a compound class consisting of several land-cover types, and the incompleteness of each sensor's data due to cloud cover can be modeled in the fusion process. Image classification based on Dempster-Shafer theory usually assumes that each sensor is represented by a single channel. Here, an evidential approach to image classification, which uses a mass function obtained under the assumption of a class-independent beta distribution, is discussed for multiple sets of multichannel data acquired from different sensors. The proposed method was applied to KOMPSAT-1 EOC panchromatic imagery and LANDSAT ETM+ data acquired over the Yongin/Nuengpyung area of the Korean peninsula. The experiment showed that the method is highly effective for applications in which it is hard to find homogeneous regions represented by a single land-cover type during training.
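Dempster's rule of combination, on which the fusion step above rests, can be sketched as follows; the class names and mass values are illustrative, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of classes -> mass)
    by Dempster's rule; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    k = 1.0 - conflict                   # normalising constant
    return {s: w / k for s, w in combined.items()}

# Evidence from two hypothetical sensors over three land-cover classes;
# note the compound hypothesis {forest, crop} that the theory permits.
theta = frozenset(["forest", "crop", "urban"])
m_ms  = {frozenset(["forest"]): 0.6, frozenset(["forest", "crop"]): 0.3, theta: 0.1}
m_pan = {frozenset(["forest"]): 0.5, frozenset(["crop"]): 0.2, theta: 0.3}
fused = dempster_combine(m_ms, m_pan)
```

Because both sources lean toward "forest", the combined mass concentrates there while the compound and ignorance hypotheses shrink.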

Modified Exposure Fusion with Improved Exposure Adjustment Using Histogram and Gamma Correction (히스토그램과 감마보정 기반의 노출 조정을 이용한 다중 노출 영상 합성 기법)

  • Park, Imjae;Park, Deajun;Jeong, Jechang
    • Journal of Broadcast Engineering
    • /
    • v.22 no.3
    • /
    • pp.327-338
    • /
    • 2017
  • Exposure fusion is a representative image fusion technique that generates a high-dynamic-range image by combining two or more images with different exposures. In this paper, we propose a block-based exposure adjustment that considers characteristics of the human visual system, together with an improved saturation measure for obtaining the weight map. The proposed exposure adjustment corrects the intensity values of each input image in accordance with the human visual system, efficiently preserving details in the exposure-fusion result. The improved saturation measure is used to build a weight map that effectively reflects saturated regions in the input images. We show the superiority of the proposed algorithm over conventional exposure fusion algorithms through comparisons of subjective image quality, MEF-SSIM, and execution time.
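A minimal sketch of exposure-fusion weighting in the spirit of this abstract; the measures below are simplified Mertens-style ones, and the gamma step is an assumed form of exposure adjustment, not the paper's block-based method:

```python
import numpy as np

def exposure_weights(img, sigma=0.2):
    """Per-pixel fusion weight for one exposure (img: HxWx3, floats in [0,1])."""
    gray = img.mean(axis=2)
    # Well-exposedness: Gaussian centred on mid-grey 0.5
    well = np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))
    # Saturation: spread across the colour channels
    sat = img.std(axis=2)
    return well * (sat + 1e-6)

def exposure_fuse(images):
    """Weighted per-pixel blend of the exposure stack."""
    w = np.stack([exposure_weights(im) for im in images])
    w /= w.sum(axis=0, keepdims=True)            # normalise the weight maps
    return (w[..., None] * np.stack(images)).sum(axis=0)

def gamma_adjust(img, g):
    """Simple gamma correction as an exposure adjustment (assumed form)."""
    return np.clip(img, 0.0, 1.0) ** g
```

A production implementation would blend with a Laplacian pyramid rather than per pixel to avoid seams, as in the original exposure-fusion literature.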

Image Fusion Framework for Enhancing Spatial Resolution of Satellite Image using Structure-Texture Decomposition (구조-텍스처 분할을 이용한 위성영상 융합 프레임워크)

  • Yoo, Daehoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.21-29
    • /
    • 2019
  • This paper proposes a novel framework for satellite image fusion that enhances the spatial resolution of an image via structure-texture decomposition. The resolution of satellite imagery depends on the sensor: for example, panchromatic images have high spatial resolution but only a single gray band, whereas multi-spectral images have low spatial resolution but multiple bands. To enhance the spatial resolution of low-resolution images, such as multi-spectral or infrared images, the proposed framework combines the structure component of the low-resolution image with the texture component of the high-resolution image. To improve the spatial quality of structural edges, the structure image of the low-resolution input is guided-filtered using the structure image of the high-resolution input as the guidance image. The combination step is performed by pixel-wise addition of the filtered structure image and the texture image. Quantitative and qualitative evaluations demonstrate that the proposed method preserves the spectral and spatial fidelity of the input images.
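The decomposition-and-fusion pipeline described above can be sketched as follows; box smoothing stands in for the paper's structure-texture decomposition, and the guided filter follows He et al.'s standard formulation rather than anything paper-specific:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=4, eps=1e-3):
    """He et al.'s guided filter: smooth `src` while following edges of `guide`."""
    k = 2 * r + 1
    mg, ms = uniform_filter(guide, k), uniform_filter(src, k)
    cov = uniform_filter(guide * src, k) - mg * ms
    var = uniform_filter(guide * guide, k) - mg * mg
    a = cov / (var + eps)                 # local linear coefficients
    b = ms - a * mg
    return uniform_filter(a, k) * guide + uniform_filter(b, k)

def fuse_structure_texture(low, high, r=4):
    """Structure from the low-resolution band plus texture from the
    high-resolution band (both 2-D, already co-registered and resampled)."""
    struct_low = uniform_filter(low, 2 * r + 1)
    struct_high = uniform_filter(high, 2 * r + 1)
    texture_high = high - struct_high     # residual = texture component
    # Filter the low-res structure with the high-res structure as guide
    refined = guided_filter(struct_high, struct_low, r)
    return refined + texture_high         # pixel-wise addition, as in the paper
```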

Deep Reference-based Dynamic Scene Deblurring

  • Cunzhe Liu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.3
    • /
    • pp.653-669
    • /
    • 2024
  • Dynamic scene deblurring is a complex computer vision problem owing to the difficulty of modeling it mathematically. In this paper, we present a novel approach to image deblurring that uses a sharp reference image to recover high-quality, high-frequency detail. To better exploit the clear reference image, we develop an encoder-decoder network with two novel modules that guide the network toward better restoration. The proposed Reference Extraction and Aggregation Module effectively establishes the correspondence between the blurry image and the reference image and selects the most relevant features for blur removal, while the proposed Spatial Feature Fusion Module enables the encoder to perceive blur information at different spatial scales. Finally, the multi-scale feature maps from the encoder and the cascaded Reference Extraction and Aggregation Modules are integrated into the decoder for global fusion and representation. Extensive quantitative and qualitative results on different benchmarks show the effectiveness of the proposed method.

Ghost-free High Dynamic Range Imaging Based on Brightness Bitmap and Hue-angle Constancy (밝기 비트맵과 색도 일관성을 이용한 무 잔상 High Dynamic Range 영상 생성)

  • Yuan, Xi;Ha, Ho-Gun;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.1
    • /
    • pp.111-120
    • /
    • 2015
  • HDR (high dynamic range) imaging is a technique for representing the dynamic range of the real world. Exposure fusion obtains a pseudo-HDR image by directly fusing multi-exposure images instead of generating a true HDR image. However, it produces ghost artifacts when the multi-exposure images contain moving objects. To address this drawback, a temporal consistency assessment is proposed to remove moving objects. First, a multi-level threshold bitmap and a brightness bitmap are proposed. In addition, a hue-angle constancy map between the multi-exposure images is proposed to compensate the bitmap. The two bitmaps are then combined into a temporal weight map, and a spatial-domain image quality assessment is used to generate a spatial weight map. Finally, the two weight maps are applied to each multi-exposure image and combined to obtain the pseudo-HDR image. In experiments, the proposed method reduces ghost artifacts more than previous methods and also scores lower (better) on a quantitative ghosting measure.
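The motion-detection idea (brightness bitmaps plus hue-angle constancy) can be sketched as below; the hue formula and tolerance are assumptions, not the paper's exact measures:

```python
import numpy as np

def brightness_bitmap(img):
    """Binary bitmap: pixel brighter than the frame's median brightness.
    Median thresholding keeps the bitmap comparable across exposure levels."""
    gray = img.mean(axis=2)
    return gray > np.median(gray)

def hue_angle(img):
    """Opponent-colour hue angle; hue should stay constant across exposures
    of a static scene even though brightness changes."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b + 1e-9)

def motion_map(img_a, img_b, hue_tol=0.2):
    """1 where the two exposures disagree (a likely moving object), else 0."""
    bm = brightness_bitmap(img_a) ^ brightness_bitmap(img_b)   # bitmap mismatch
    hm = np.abs(hue_angle(img_a) - hue_angle(img_b)) > hue_tol  # hue mismatch
    return (bm | hm).astype(float)
```

Pixels flagged by the motion map would be down-weighted in the temporal weight map before fusion.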

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depth map corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the surrounding scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depth map corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
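A minimal sketch of projecting LIDAR returns into a camera's image plane to form a depth map; the intrinsics K and extrinsics R, t are assumed to come from a prior calibration step not detailed in the abstract:

```python
import numpy as np

def project_lidar_to_depthmap(points, K, R, t, h, w):
    """Project Nx3 LIDAR points (sensor frame) into an HxW depth map.
    K: 3x3 camera intrinsics; R, t: LIDAR-to-camera extrinsics from
    calibration. Pixels with no LIDAR return remain 0."""
    cam = points @ R.T + t              # transform into camera coordinates
    in_front = cam[:, 2] > 0            # keep points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T                      # homogeneous pixel coordinates
    u = (uv[:, 0] / uv[:, 2]).round().astype(int)
    v = (uv[:, 1] / uv[:, 2]).round().astype(int)
    depth = np.zeros((h, w))
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    depth[v[ok], u[ok]] = cam[ok, 2]    # z in the camera frame is the depth
    return depth
```

The sparse map produced this way is typically densified (e.g. by interpolation) before use, since a 2-D scanning LIDAR covers only a small fraction of the image pixels.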

Image Retrieval Using the Fusion of Spatial Histogram and Wavelet Moments (공간 히스토그램과 웨이브릿 모멘트의 융합에 의한 영상검색)

  • 서상용;손재곤;김남철
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.11-14
    • /
    • 2000
  • We present an image retrieval method that improves the retrieval rate by fusing histogram and wavelet moment features. The key idea is that images similar to a query image are first selected from the database using wavelet moment features; the final results are then retrieved from these candidates using the histogram method. To evaluate the performance of the proposed method, we use the Brodatz texture database, the MPEG-7 T1 database, and Corel Draw photo images. Experimental results show that the proposed method outperforms both the histogram method and the wavelet moment method used individually.
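The two-stage (coarse-to-fine) retrieval scheme can be sketched as follows, assuming the wavelet-moment and histogram features have already been computed for the query and database images; the distance measures and candidate counts are illustrative choices:

```python
import numpy as np

def two_stage_retrieve(query_wm, query_hist, db_wm, db_hist,
                       n_candidates=10, n_results=3):
    """Coarse-to-fine retrieval: prune the database with wavelet-moment
    features, then re-rank the survivors by histogram distance."""
    # Stage 1: Euclidean distance on wavelet moments selects candidates
    d_wm = np.linalg.norm(db_wm - query_wm, axis=1)
    cand = np.argsort(d_wm)[:n_candidates]
    # Stage 2: L1 histogram distance ranks the candidate set
    d_h = np.abs(db_hist[cand] - query_hist).sum(axis=1)
    return cand[np.argsort(d_h)[:n_results]]
```

The pruning stage keeps the expensive histogram comparison off most of the database, which is the source of the speed/accuracy trade-off such cascades offer.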


EFFICIENT IHS BASED IMAGE FUSION WITH 'COMPENSATIVE' MATRIX CONSTRUCTED BY SIMULATING THE SCALING PROCESS

  • Nguyen, TienCuong;Kim, Dae-Sung;Kim, Yong-Il
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.639-642
    • /
    • 2006
  • The intensity-hue-saturation (IHS) technique has become a standard procedure in image analysis; it enhances the colour of highly correlated data. Unfortunately, the IHS technique is sensitive to the properties of the analyzed area and usually suffers from colour distortion in the fusion process. This paper explores the relationship between colours before and after fusion and the change in the colour space of the images. The fused colours are then transformed back into 'simulative' true colours in two steps: (1) each pixel of the fused image that matches an original pixel (of the coarse-spectral-resolution image) is transformed back to the true colour of the original pixel; (2) the values of the interpolated pixels are compensated to preserve the DN ratio between the original pixel and its vicinity. The 'compensative matrix' is constructed from the DNs of the fused images and a simulation of the scaling process. An illustrative example with a fused Landsat and SPOT image demonstrates the simulative true-colour fusion method.
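A sketch of the baseline fast-IHS substitution plus a simplified restore-original-colour step in the spirit of step (1) above; the paper's compensative matrix for interpolated pixels is more involved than the mask shown here:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast IHS-style fusion: shift each multispectral band by the
    difference between the panchromatic band and the MS intensity.
    ms: HxWx3 (already upsampled to pan resolution), pan: HxW, in [0,1]."""
    intensity = ms.mean(axis=2)
    fused = ms + (pan - intensity)[..., None]
    return np.clip(fused, 0.0, 1.0)

def restore_original_colour(fused, ms, mask):
    """Step (1), simplified: pixels aligned with an original MS sample
    (mask == True) are set back to the original colour; interpolated
    pixels would need the paper's DN-ratio compensation instead."""
    out = fused.copy()
    out[mask] = ms[mask]
    return out
```

When the pan band equals the MS intensity, the substitution is a no-op, which is a handy sanity check on an implementation.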


The Dosimetric Effect on Real PTV and OARs at Various Image Fusion Protocol for Pituitary Adenomas (뇌하수체 종양의 방사선 수술 시 영상 융합 프로토콜이 실제 PTV와 OAR 선량에 미치는 영향)

  • Lee, Kyung-Nam;Lee, Dong-Joon;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.21 no.4
    • /
    • pp.354-359
    • /
    • 2010
  • The purpose of this study is to verify the dosimetric effect on real PTV (planning target volume) coverage and the safety of OARs (organs at risk) in radiosurgery plans for pituitary adenomas based on various image fusion protocols. Real PTV coverage and its variation were acquired, and the maximum dose and the volume receiving above a threshold dose were also measured to verify the safety of the optic pathway and brainstem. The protocol that reduces superior-inferior uncertainty by using both axial and coronal MR (magnetic resonance) image sets shows relatively lower values than the protocol using only axial image sets. As a result, an image fusion protocol using both axial and coronal image sets can be beneficial for generating an OAR-weighted radiosurgery plan.