• Search results for "satellite image fusion" (matched in title, summary, or keyword)

108 results

Generalized IHS-Based Satellite Imagery Fusion Using Spectral Response Functions

  • Kim, Yong-Hyun; Eo, Yang-Dam; Kim, Youn-Soo; Kim, Yong-Il
    • ETRI Journal / v.33 no.4 / pp.497-505 / 2011
  • Image fusion is a technique that integrates the spatial details of a high-resolution panchromatic (HRP) image with the spectral information of low-resolution multispectral (LRM) images to produce high-resolution multispectral images. The key requirement in image fusion is to enhance the spatial details of the HRP image while simultaneously preserving the spectral information of the LRM images. This implies that the physical characteristics of the satellite sensor should be considered in the fusion process. Moreover, to fuse massive volumes of satellite imagery, the fusion method should have low computational cost. In this paper, we propose a fast and efficient satellite image fusion method. The proposed method uses the spectral response functions of the satellite sensor and thus rationally reflects the sensor's physical characteristics in the fused image. As a result, it provides high-quality fused images in terms of both spectral and spatial evaluations. Experimental results on IKONOS images indicate that the proposed method outperforms the intensity-hue-saturation and wavelet-based methods.
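The component-substitution idea behind IHS-style fusion can be sketched in a few lines. The sketch below is a generic illustration, not the paper's spectral-response-weighted method; it assumes the MS image is already upsampled to PAN resolution and uses a plain band average as the intensity component:

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Generic IHS-style component substitution (illustrative sketch).

    ms  : (H, W, B) multispectral image, upsampled to PAN resolution
    pan : (H, W) panchromatic image on the same radiometric scale
    """
    # Intensity component: plain band average (the "I" of IHS).
    intensity = ms.mean(axis=2)
    # Substituting PAN for the intensity shifts every band by the
    # same spatial-detail term (pan - intensity).
    return ms + (pan - intensity)[..., np.newaxis]
```

In the generalized method described above, the intensity would instead be a weighted band combination derived from the sensor's spectral response functions rather than a plain average.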

Fusion Techniques Comparison of GeoEye-1 Imagery

  • Kim, Yong-Hyun; Kim, Yong-Il; Kim, Youn-Soo
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.517-529 / 2009
  • Many satellite image fusion techniques have been developed to produce a high-resolution multispectral (MS) image by combining a high-resolution panchromatic (PAN) image with a low-resolution MS image. Heretofore, most high-resolution image fusion studies have used IKONOS and QuickBird images. Recently, GeoEye-1, offering the highest resolution of any commercial imaging system, was launched. In this study, we experimented with GeoEye-1 images to evaluate which fusion algorithms are suitable for them. This paper compares and evaluates the efficiency of five image fusion techniques for GeoEye-1 imagery: the à trous algorithm-based additive wavelet transform (AWT) fusion technique, the Principal Component Analysis (PCA) fusion technique, Gram-Schmidt (GS) spectral sharpening, Pansharp, and the Smoothing Filter-based Intensity Modulation (SFIM) fusion technique. The experimental results show that the AWT fusion technique preserves more spatial detail of the PAN image and more spectral information of the MS image than the other techniques. The Pansharp technique also preserves the information of the original PAN and MS images as well as the AWT technique does.

Image Fusion and Evaluation by using Mapping Satellite-1 Data

  • Huang, He; Hu, Yafei; Feng, Yi; Zhang, Meng; Song, DongSeob
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.6_2 / pp.593-599 / 2013
  • China's Mapping Satellite-1, developed by the China Aerospace Science and Technology Corporation (CASC), was launched three years ago. Data from Mapping Satellite-1 can be used for efficient surveying and geometric mapping applications. In this paper, we fuse panchromatic and multispectral images of the Changchun area obtained from Mapping Satellite-1, China's first transmission-type stereo mapping satellite. Four traditional image fusion methods (HPF, Mod.IHS, Pansharp, and wavelet transform) were applied to fuse the Mapping Satellite-1 remote sensing data effectively. We then assessed the results with commonly used methods: subjective qualitative evaluation and quantitative statistical analysis. We found that, among the four methods, wavelet-transform fusion is optimal in terms of degree of distortion, rendition of details, and availability of image information. A further study is necessary to better understand the optimal methods for fusing Mapping Satellite-1 images.

Image Fusion Methods for Multispectral and Panchromatic Images of Pleiades and KOMPSAT 3 Satellites

  • Kim, Yeji; Choi, Jaewan; Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.413-422 / 2018
  • Many applications using data from high-resolution multispectral satellite sensors require an image fusion step, known as pansharpening, before processing and analyzing the multispectral images when spatial fidelity is crucial. Image fusion methods aim to produce images with both high spatial and high spectral resolution while reducing the spectral distortion that arises during fusion. These methods can be classified into MRA (Multi-Resolution Analysis) and CSA (Component Substitution Analysis) approaches. To suggest an efficient image fusion method for the Pleiades and KOMPSAT (Korea Multi-Purpose Satellite) 3 satellites, this study evaluates fusion methods for their multispectral and panchromatic images. HPF (High-Pass Filtering), SFIM (Smoothing Filter-based Intensity Modulation), GS (Gram-Schmidt), and GSA (Adaptive GS) were selected as representative MRA- and CSA-based methods and applied to the multispectral and panchromatic images. Their performance was evaluated using visual and quality-index analysis. The HPF and SFIM fusion results showed weak spatial detail. The GS and GSA results enhanced spatial information closer to the panchromatic images, but GS produced more spectral distortion on urban structures. This study showed that GSA was effective in improving the spatial resolution of multispectral images from Pleiades 1A and KOMPSAT 3.
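Of the methods compared above, SFIM has a particularly compact formulation: each MS band is multiplied by the ratio of the PAN image to a smoothed (low-pass) version of itself. A minimal NumPy sketch, where the box-filter size and edge handling are illustrative assumptions:

```python
import numpy as np

def box_filter(img, k=5):
    """Mean filter with edge padding (a simple low-pass stand-in)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sfim(ms, pan, k=5):
    """Smoothing Filter-based Intensity Modulation (illustrative).

    ms  : (H, W, B) multispectral image, upsampled to PAN resolution
    pan : (H, W) panchromatic image
    """
    low = box_filter(pan, k)
    # Modulation ratio: close to 1 in smooth areas, deviates near edges.
    ratio = pan / np.maximum(low, 1e-12)
    return ms * ratio[..., np.newaxis]
```

Because the ratio is near 1 away from edges, SFIM injects spatial detail while largely preserving the band-to-band spectral ratios of the MS image.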

Potential for Image Fusion Quality Improvement through Shadow Effects Correction (그림자효과 보정을 통한 영상융합 품질 향상 가능성)

  • 손홍규; 윤공현
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / pp.397-402 / 2003
  • This study aims to improve the quality of image fusion results through shadow effects correction. For this, a shadow effects correction algorithm is proposed, and visual comparisons were made to assess the quality of the fusion results. The following four steps were performed to improve image fusion quality. First, the shadow regions of the satellite image are precisely located. Subsequently, segmentation of context regions is performed manually for accurate correction. Next, to calculate the correction factor, we compare each shadow context region with a matching non-shadow context region. Finally, image fusion is performed using the corrected images. The results presented here help to accurately extract and interpret geo-spatial information from satellite imagery.
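The correction-factor step described above can be sketched as a per-band gain computed from a matched pair of regions. The mean-ratio form below is an assumption for illustration; the paper's exact factor may differ:

```python
import numpy as np

def shadow_gain(shadow_pixels, reference_pixels):
    """Per-band correction factor from a shadowed context region and a
    matching non-shadow region of the same surface type (illustrative).

    shadow_pixels, reference_pixels : (N, B) arrays of pixel vectors
    """
    # Ratio of mean brightness per band; guard against division by zero.
    return reference_pixels.mean(axis=0) / np.maximum(
        shadow_pixels.mean(axis=0), 1e-12)

def correct_shadow(pixels, gain):
    """Apply the per-band gain to brighten shadowed pixels."""
    return pixels * gain
```

With the shadows brightened this way before fusion, the fused product avoids the exaggerated dark artifacts that uncorrected shadow regions would produce.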


Fast and Efficient Satellite Imagery Fusion Using DT-CWT Proportional and Wavelet Zero-Padding

  • Kim, Yong-Hyun; Oh, Jae-Hong; Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.6 / pp.517-526 / 2015
  • Among the various image fusion or pan-sharpening methods, wavelet-based methods provide superior radiometric quality. However, their fusion processing is neither simple nor flexible, since many low- and high-frequency sub-bands are produced in the wavelet domain. To address this issue, a novel proportional fusion method based on the DT-CWT (Dual-Tree Complex Wavelet Transform) and WZP (Wavelet Zero-Padding) is proposed. The proposed method produces a single high-frequency image in the spatial domain that is injected into the LRM (Low-Resolution Multispectral) image; wavelet-domain fusion can thus be simplified to spatial-domain fusion. In addition, by adopting WZP, the proposed DT-CWTP (DT-CWT Proportional) fusion method makes it unnecessary to decompose the LRM image. Comparisons indicate that the proposed method is nearly five times faster than the DT-CWT with SW (Substitute-Wavelet) fusion method while maintaining radiometric quality. Experiments with WorldView-2 satellite images demonstrated promising results in both computational efficiency and fused-image quality.
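The key simplification, collapsing wavelet-domain fusion into a single high-frequency image injected into the LRM image in the spatial domain, can be sketched generically. A box filter stands in for the DT-CWT detail extraction here, so this illustrates the injection idea rather than the DT-CWTP method itself:

```python
import numpy as np

def inject_detail(ms, pan, gain=1.0, k=5):
    """Spatial-domain detail injection (illustrative sketch).

    A single high-frequency image is extracted from PAN and added to
    each (upsampled) MS band. The box-filter low-pass and the gain are
    assumptions standing in for the wavelet-derived detail image.
    """
    pad = k // 2
    padded = np.pad(pan, pad, mode="edge")
    low = np.zeros_like(pan, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= k * k
    detail = pan - low          # the single high-frequency image
    return ms + gain * detail[..., np.newaxis]
```

Because all the wavelet bookkeeping is folded into producing one detail image, the per-band fusion itself reduces to a single addition, which is where the reported speed-up comes from.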

Estimation of Global Image Fusion Parameters for KOMPSAT-3A: Application to Korean Peninsula (아리랑 3A호의 글로벌 융합 파라미터 추정방법: 한반도 영역을 대상으로)

  • Park, Sung-Hwan; Oh, Kwan-Young; Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.35 no.6_4 / pp.1363-1372 / 2019
  • In this study, we analyzed the fusion parameters required to produce a high-resolution multispectral image with an image fusion technique and propose global fusion parameters. We analyzed the linear regression coefficients that simulate the panchromatic image, as well as the fusion coefficients required for producing the fused image. When fused images were produced using the representative (global) fusion parameters, the difference in DN values between the fused images was confirmed to be quantitatively smaller than when the per-scene optimal fusion parameters were used. This study can therefore minimize the regional characteristics reflected in the fused image.
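The regression step, fitting band weights that simulate the panchromatic image from the multispectral bands, is an ordinary least-squares problem. A sketch, assuming the PAN image has been degraded to the MS resolution; the bias term is an illustrative assumption:

```python
import numpy as np

def estimate_fusion_weights(ms, pan):
    """Least-squares band weights that best simulate PAN from MS bands.

    ms  : (H, W, B) multispectral image
    pan : (H, W) panchromatic image at the MS resolution
    Returns a length-(B+1) vector: B band weights plus a bias.
    """
    H, W, B = ms.shape
    # Flatten pixels into rows; append a column of ones for the bias.
    A = np.column_stack([ms.reshape(-1, B), np.ones(H * W)])
    coeffs, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    return coeffs
```

Averaging such per-scene weights over many scenes would yield the kind of global fusion parameters the study proposes, trading a small per-scene fit error for consistency across regions.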

Semi-Automated Extraction of Geographic Information using KOMPSAT 2 : Analyzing Image Fusion Methods and Geographic Objected-Based Image Analysis (다목적 실용위성 2호 고해상도 영상을 이용한 지리 정보 추출 기법 - 영상융합과 지리객체 기반 분석을 중심으로 -)

  • Yang, Byung-Yun; Hwang, Chul-Sue
    • Journal of the Korean Geographical Society / v.47 no.2 / pp.282-296 / 2012
  • This study compared the effects of the spatial resolution ratio in image fusion using Korea Multi-Purpose Satellite 2 (KOMPSAT-2), also known as Arirang-2. Image fusion techniques, also called pansharpening, are required to obtain color imagery with high spatial resolution from panchromatic and multispectral images. Higher-quality satellite images generated by image fusion enable interpreters to produce better application results. Thus, image fusion methods from three categories were applied to identify significantly improved fused images from KOMPSAT-2. All fused images were evaluated for both spectral and spatial quality to determine the optimal fused image. Additionally, this research compared Pixel-Based Image Analysis (PBIA) with GEOgraphic Object-Based Image Analysis (GEOBIA) to obtain better classification results. Specifically, building rooftops were extracted with both image analysis approaches and evaluated to determine the most accurate result. This research therefore supports the effective use of very-high-resolution satellite imagery by image interpreters in many applications, such as coastal-area, urban, and regional planning.


Data Fusion Using Image Segmentation in High Spatial Resolution Satellite Imagery

  • Lee, Jong-Yeol
    • Proceedings of the KSRS Conference / pp.283-285 / 2003
  • This paper describes a data fusion method for high-spatial-resolution satellite imagery. Pixels located around an object edge exhibit spectral mixing because of the geometric nature of the pixel: the larger the pixel size, the wider the area of spectral mixing. The intensities of pixels adjacent to edges were modified using the spectral characteristics of pixels located in the interior of objects. The method developed in this study was tested using IKONOS multispectral and PAN data of part of Jeju-shi, Korea. The test application shows that the spectral information of pixels adjacent to edges was improved substantially.


Reconstruction of Buildings from Satellite Image and LIDAR Data

  • Guo, T.; Yasuoka, Y.
    • Proceedings of the KSRS Conference / pp.519-521 / 2003
  • This paper presents an approach for the automatic extraction and reconstruction of buildings in urban built-up areas based on the fusion of high-resolution satellite imagery and LIDAR data. The data fusion scheme is essentially motivated by the fact that image and range data are quite complementary. Raised urban objects are first segmented from the terrain surface in the LIDAR data using the spectral signature derived from the satellite image, after which potential building regions are detected in a hierarchical scheme. A novel 3D building reconstruction model is also presented, based on the assumption that most buildings can be approximately decomposed into polyhedral patches. With the constraints of the presented building model, 3D edges are used to generate hypotheses that are then verified, and subsequent logical processing of the primitive geometric patches leads to 3D reconstruction of buildings with good shape detail. The approach was applied to test sites and shows good performance; an evaluation is also described in the paper.
