• Title/Abstract/Keyword: Multi-resolution Image fusion

An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin;Jang, Chulhee;Jin, Sanghun;Hong, Yonghee
    • 한국컴퓨터정보학회논문지, Vol. 22 No. 12, pp. 69-77, 2017
  • This paper presents a novel framework for multi-scale image fusion. The Multi-scale Kalman Smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. Although this approach generally offers outstanding fusion accuracy and efficiency, the quad-tree based method is limited in some applications by its stair-like covariance structure, which produces unrealistic blocky artifacts in the fusion result wherever finest-scale data are missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed. By applying a Super Resolution (SR) technique within the MKS algorithm, finely resolved measurements are generated and blended through the tree structure, so that the missing detail in data-void regions of the fine-scale image is properly inferred and the blocky artifacts are suppressed in the fusion result. Simulation results show that the proposed method significantly improves on the conventional MKS algorithm in terms of both Root Mean Square Error (RMSE) and visual quality.
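
To make the idea concrete, here is a minimal NumPy sketch of the step described above in which missing fine-scale data are filled with a super-resolved estimate before fusion. It does not reproduce the quad-tree MKS recursion; bicubic upsampling stands in for the SR model, and the measurement variances and the inverse-variance blend are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def fill_and_fuse(fine, fine_mask, coarse, fine_var=0.05, sr_var=0.20):
    """Blend a fine-scale image with an SR-style estimate upsampled from the
    coarse scale. Where fine-scale data are missing (mask == 0), only the
    upsampled estimate is used, which avoids the blocky look of propagating
    coarse blocks directly. Variances are illustrative, not calibrated."""
    scale = fine.shape[0] / coarse.shape[0]
    # Stand-in for a learned SR model: smooth (bicubic) upsampling.
    sr_estimate = zoom(coarse, scale, order=3)[:fine.shape[0], :fine.shape[1]]

    # Inverse-variance (Kalman-style) weights for the two measurements.
    w_fine = 1.0 / fine_var
    w_sr = 1.0 / sr_var
    fused_both = (w_fine * fine + w_sr * sr_estimate) / (w_fine + w_sr)

    # Use the blend where fine data exist, the SR estimate elsewhere.
    return np.where(fine_mask > 0, fused_both, sr_estimate)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fine = rng.random((64, 64))
    mask = np.ones((64, 64))
    mask[20:40, 20:40] = 0                                   # simulated data void
    coarse = fine.reshape(16, 4, 16, 4).mean(axis=(1, 3))    # 4x coarser image
    print(fill_and_fuse(fine, mask, coarse).shape)           # (64, 64)
```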

Multi-Resolution MSS Image Fusion

  • Ghassemian, Hassan;Amidian, Asghar
    • 대한원격탐사학회 학술대회논문집 (Proceedings of ACRS 2003 ISRS), pp. 648-650, 2003
  • Efficient multi-resolution image fusion aims to exploit the high spectral resolution of Landsat TM images and the high spatial resolution of SPOT panchromatic images simultaneously. This paper presents a multi-resolution data fusion scheme based on a multirate image representation. It is motivated by analytical results from high-resolution multispectral image data: the energy carrying the spectral features is concentrated in the lower frequency bands, while the spatial features, such as edges, lie in the higher frequency bands. This makes it possible to spatially enhance the multispectral images by adding the high-resolution spatial features to them through a multirate filtering procedure. The proposed method is compared with several conventional methods; results show that it preserves more spectral features with less spatial distortion.
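
A minimal sketch of the spatial-enhancement idea described above, assuming co-registered inputs: the high-frequency content of the high-resolution panchromatic image is extracted with a simple low-pass/residual split and added to each upsampled multispectral band. The box filter and the plain additive injection are stand-ins for the paper's multirate filtering procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def highpass_inject(ms, pan, kernel=5):
    """ms: (bands, h, w) low-resolution multispectral cube.
    pan: (H, W) high-resolution panchromatic image.
    Returns a (bands, H, W) cube with PAN high frequencies added."""
    scale = pan.shape[0] / ms.shape[1]
    # High-frequency part of the panchromatic image (original minus low-pass).
    pan_high = pan - uniform_filter(pan, size=kernel)
    fused = []
    for band in ms:
        up = zoom(band, scale, order=1)[:pan.shape[0], :pan.shape[1]]
        fused.append(up + pan_high)   # simple additive injection
    return np.stack(fused)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pan = rng.random((128, 128))
    ms = rng.random((4, 32, 32))
    print(highpass_inject(ms, pan).shape)   # (4, 128, 128)
```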

Image Fusion Methods for Multispectral and Panchromatic Images of Pleiades and KOMPSAT 3 Satellites

  • Kim, Yeji;Choi, Jaewan;Kim, Yongil
    • 한국측량학회지, Vol. 36 No. 5, pp. 413-422, 2018
  • Many applications using satellite data from high-resolution multispectral sensors require an image fusion step, known as pansharpening, before the multispectral images can be processed and analyzed when spatial fidelity is crucial. Image fusion methods aim to produce images with higher spatial and spectral resolution while reducing the spectral distortion that arises during fusion. These methods can be classified into MRA (Multi-Resolution Analysis) and CSA (Component Substitution Analysis) approaches. To identify an efficient image fusion method for the Pleiades and KOMPSAT (Korea Multi-Purpose Satellite) 3 satellites, this study evaluates fusion methods for multispectral and panchromatic images. HPF (High-Pass Filtering), SFIM (Smoothing Filter-based Intensity Modulation), GS (Gram-Schmidt), and GSA (Adaptive GS) were selected as representative MRA and CSA based fusion methods, applied to multispectral and panchromatic images, and evaluated by visual inspection and quality indices. HPF and SFIM results showed weak spatial detail. GS and GSA results had enhanced spatial information closer to the panchromatic images, but GS produced more spectral distortion on urban structures. The study shows that GSA was effective in improving the spatial resolution of multispectral images from Pleiades 1A and KOMPSAT 3.
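
Of the methods compared above, SFIM has the simplest closed form: each upsampled multispectral band is modulated by the ratio of the panchromatic image to its smoothed version. The sketch below assumes bilinear upsampling and a box smoothing filter; both choices are illustrative, not the study's exact configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfim(ms_band, pan, kernel=7, eps=1e-6):
    """Smoothing Filter-based Intensity Modulation for one band:
    fused = MS_upsampled * PAN / smoothed(PAN)."""
    scale = pan.shape[0] / ms_band.shape[0]
    up = zoom(ms_band, scale, order=1)[:pan.shape[0], :pan.shape[1]]
    pan_low = uniform_filter(pan, size=kernel)
    return up * pan / (pan_low + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pan = rng.random((256, 256)) + 0.5
    ms_band = rng.random((64, 64)) + 0.5
    print(sfim(ms_band, pan).shape)   # (256, 256)
```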

Image Fusion Framework for Enhancing Spatial Resolution of Satellite Image using Structure-Texture Decomposition (구조-텍스처 분할을 이용한 위성영상 융합 프레임워크)

  • 유대훈
    • 한국컴퓨터그래픽스학회논문지, Vol. 25 No. 3, pp. 21-29, 2019
  • This paper presents a framework that improves the spatial resolution of satellite images by decomposing and fusing them on the basis of structure-texture decomposition. Satellite images have different spatial resolutions depending on the wavelength their sensors detect: a panchromatic image generally has high spatial resolution but a single grayscale channel, whereas a multi-spectral or infrared image has lower spatial resolution than the panchromatic image but carries rich spectral band and thermal information. To enhance the spatial resolution of multi-spectral or infrared images, the framework builds on the observation that image detail resides only in the texture component. The low-resolution and high-resolution images are each decomposed into a structure image and a texture image, and the low-resolution structure image is guided-filtered with the high-resolution structure image as the reference. Following the structure-texture image model, the filtered structure component of the low-resolution image and the texture component of the high-resolution image are added pixel by pixel to produce the final image, which therefore contains the spectral bands of the low-resolution image and the detail of the high-resolution image. Experiments confirm that the proposed method preserves both spectral and spatial resolution.
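
A compact sketch of the pipeline described in the abstract above, under simplifying assumptions: a box low-pass filter stands in for the structure-texture decomposition, the guided filter is the standard gray-scale formulation, and the inputs are single-channel images already resampled to the same size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box(x, r):
    return uniform_filter(x, size=2 * r + 1)

def guided_filter(guide, src, r=8, eps=1e-3):
    """Standard gray-scale guided filter: filters `src` using `guide`."""
    mean_g, mean_s = box(guide, r), box(src, r)
    var_g = box(guide * guide, r) - mean_g * mean_g
    cov_gs = box(guide * src, r) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return box(a, r) * guide + box(b, r)

def structure_texture_fuse(low, high, r_dec=4, r_gf=8, eps=1e-3):
    """low: upsampled low-resolution band; high: high-resolution image of the
    same size. Structure = low-pass component, texture = residual detail."""
    low_structure = box(low, r_dec)
    high_structure = box(high, r_dec)
    high_texture = high - high_structure
    # Refine the low-resolution structure using the high-resolution structure.
    refined = guided_filter(high_structure, low_structure, r_gf, eps)
    return refined + high_texture   # pixel-wise recombination

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    low = rng.random((128, 128))
    high = rng.random((128, 128))
    print(structure_texture_fuse(low, high).shape)   # (128, 128)
```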

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems, Vol. 19 No. 4, pp. 427-438, 2023
  • Many works have shown that single image super-resolution (SISR) models trained on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of multi-scale features of text images. This paper proposes a multi-scale and attention fusion model for realistic STISR. A multi-scale learning mechanism is introduced to acquire richer feature representations of text images; spatial and channel attention are introduced to capture local information and inter-channel interactions; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average scene text recognition (ASTER) accuracy by 1.2% compared with the text super-resolution network.
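
The abstract above combines multi-scale convolutions with channel and spatial attention in a residual block. The PyTorch sketch below shows one plausible arrangement of those pieces; the module name, kernel sizes, and reduction ratio are hypothetical and do not reproduce the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiScaleResidualAttention(nn.Module):
    """Toy residual block: parallel 3x3/5x5/7x7 convolutions, followed by
    channel attention (squeeze-and-excitation style) and spatial attention."""
    def __init__(self, channels=64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.merge = nn.Conv2d(3 * channels, channels, 1)
        # Channel attention: global pooling followed by a bottleneck MLP.
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        # Spatial attention from pooled channel statistics.
        self.sa = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        y = self.merge(feats)
        y = y * self.ca(y)                                  # channel attention
        stats = torch.cat([y.mean(1, keepdim=True),
                           y.amax(1, keepdim=True)], dim=1)
        y = y * self.sa(stats)                              # spatial attention
        return x + y                                        # residual connection

if __name__ == "__main__":
    block = MultiScaleResidualAttention(64)
    print(block(torch.randn(1, 64, 16, 32)).shape)   # torch.Size([1, 64, 16, 32])
```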

Estimation of Probability of Image Fusion to Improve Accuracy of NDVI Analysis (식생지수 분석의 정확도 향상을 위한 영상융합의 가능성 평가)

  • 송영선;손흥규;박정환
    • 한국측량학회 학술대회논문집 (Proceedings of the 2006 Spring Conference), pp. 297-304, 2006
  • This paper evaluates the potential of image fusion to improve the accuracy of NDVI analysis. NDVI has been used to monitor extensive forests and forest fires, and image fusion is a way to bring a multispectral image up to the same resolution as a high-resolution panchromatic image. Wavelet, PCA, IHS, Brovey, and multiplicative methods were applied to improve the spatial resolution of a SPOT-4 satellite image. NDVI images were generated from the original and fused images, and the correlation coefficient between the fused and original images was calculated. The comparison shows that the PCA method performed best.
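
The evaluation described above reduces to two small computations: the NDVI ratio and the correlation coefficient between the NDVI of the original and the fused imagery. A minimal sketch, assuming co-registered red and near-infrared reflectance arrays:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def ndvi_correlation(nir_orig, red_orig, nir_fused, red_fused):
    """Pearson correlation between NDVI of original and fused imagery."""
    a = ndvi(nir_orig, red_orig).ravel()
    b = ndvi(nir_fused, red_fused).ravel()
    return np.corrcoef(a, b)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    nir, red = rng.random((100, 100)), rng.random((100, 100))
    # Fused bands simulated here as slightly perturbed originals.
    nir_f = nir + 0.01 * rng.random((100, 100))
    red_f = red + 0.01 * rng.random((100, 100))
    print(ndvi_correlation(nir, red, nir_f, red_f))
```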

Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun;Choo, YeonSeung;Jeong, Hea In;Kim, Chung-Il;Shin, Saim;Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16 No. 7, pp. 2328-2344, 2022
  • 2D human pose estimation still struggles with low-resolution images. Most existing top-down approaches scale the target human bounding box image up to a large size and feed the scaled image into the network. Up-sampling introduces artifacts into the low-resolution target images, and the degraded images hinder accurate estimation of the joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from these images are fused in the network. Moreover, we introduce a guiding channel that makes the multi-resolution input features affect the network differently according to the resolution of the target image. Experiments on the MS COCO dataset, a representative benchmark for 2D human pose estimation, show that our method outperforms the strong baseline HRNet and previous state-of-the-art methods.
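
A sketch of the input construction only, under stated assumptions: the person crop is rescaled to several fixed sizes, resampled back to the network input size, and a constant-valued guiding channel encoding the crop's original resolution is concatenated to each input. The sizes and the encoding are hypothetical; the fusion network itself is not reproduced.

```python
import torch
import torch.nn.functional as F

def build_multiresolution_inputs(crop, sizes=(64, 128, 256), network_size=256):
    """crop: (1, 3, h, w) bounding-box image of the target person.
    Returns a list of (1, 4, network_size, network_size) tensors: RGB plus a
    guiding channel whose constant value encodes the crop's original height."""
    guide_value = min(crop.shape[-2] / network_size, 1.0)  # resolution cue in [0, 1]
    inputs = []
    for s in sizes:
        # Downscale to the intermediate size, then resample to the network size,
        # so each input carries a different level of detail.
        x = F.interpolate(crop, size=(s, s), mode='bilinear', align_corners=False)
        x = F.interpolate(x, size=(network_size, network_size),
                          mode='bilinear', align_corners=False)
        guide = torch.full_like(x[:, :1], guide_value)
        inputs.append(torch.cat([x, guide], dim=1))
    return inputs

if __name__ == "__main__":
    crop = torch.randn(1, 3, 48, 36)   # a low-resolution person crop
    for t in build_multiresolution_inputs(crop):
        print(t.shape)   # torch.Size([1, 4, 256, 256]) three times
```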

Image Fusion of High Resolution SAR and Optical Image Using High Frequency Information (고해상도 SAR와 광학영상의 고주파 정보를 이용한 다중센서 융합)

  • 변영기;채태병
    • 한국측량학회지, Vol. 30 No. 1, pp. 75-86, 2012
  • SAR has the advantage of acquiring imagery regardless of weather conditions or solar elevation, but it is less visually interpretable than optical imagery. Interest is therefore growing in multi-sensor fusion techniques that improve the interpretability of SAR images by fusing in the multispectral information of optical imagery. This study proposes a new multi-sensor fusion technique that preserves the spatial detail of the SAR image and the spectral information of the multispectral image by extracting high-frequency information with the fast Fourier transform and removing outliers. The experimental data were a TerraSAR-X image, acquired with a high-resolution X-band SAR system equivalent to that of KOMPSAT-5, and a KOMPSAT-2 multispectral image. To assess the usefulness of the proposed technique, visual and quantitative comparisons were made against fusion methods widely used for satellite image fusion. The results confirm that the proposed method preserves spectral information better than the existing image fusion algorithms.
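
A minimal sketch of the frequency-domain step described above: the high-frequency portion of the SAR image is isolated with an FFT mask and injected into each co-registered multispectral band. The cutoff radius and the additive injection are assumptions, and the outlier-removal step is reduced to simple clipping.

```python
import numpy as np

def fft_highpass(image, cutoff=0.1):
    """Keep only frequencies farther than `cutoff` (fraction of Nyquist)
    from the DC component."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    f[dist < cutoff] = 0                       # zero out low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def fuse_sar_optical(sar, ms_bands, cutoff=0.1, clip_sigma=3.0):
    """sar: (H, W) SAR image; ms_bands: (bands, H, W) co-registered optical."""
    detail = fft_highpass(sar, cutoff)
    # Crude outlier suppression standing in for the paper's removal step.
    limit = clip_sigma * detail.std()
    detail = np.clip(detail, -limit, limit)
    return ms_bands + detail[None, :, :]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    sar = rng.random((128, 128))
    ms = rng.random((3, 128, 128))
    print(fuse_sar_optical(sar, ms).shape)   # (3, 128, 128)
```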

Image Fusion for Improving Classification

  • Lee, Dong-Cheon;Kim, Jeong-Woo;Kwon, Jay-Hyoun;Kim, Chung;Park, Ki-Surk
    • 대한원격탐사학회 학술대회논문집 (Proceedings of ACRS 2003 ISRS), pp. 1464-1466, 2003
  • Classification of satellite images provides information about land cover and/or land use. The quality of the classification result depends mainly on the spatial and spectral resolution of the images. In this study, image fusion, in the form of resolution merging and band integration of multi-source satellite images (Landsat ETM+ and Ikonos), was carried out to improve classification. Resolution merging and band integration can generate high-resolution imagery with more spectral bands. Precise image co-registration is required to remove geometric distortion between image sources. A combination of unsupervised and supervised classification of the fused imagery was implemented to improve the classification, and a 3D display of the results, obtained by combining a DEM with the classification result, improved interpretability.

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun;Wu, Yue;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12 No. 5, pp. 2366-2395, 2018
  • Multi-view super-resolution (MVSR) refers to the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by different cameras in a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that realizes IF and BD jointly by optimizing an integrated energy function. First, we reformulate the MVSR problem as a multi-channel blind deblurring (MCBD) problem, which is easier to solve than the original formulation. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization subproblems with respect to the desired HR image and with respect to the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and on images captured by our own camera array system demonstrate the effectiveness of the proposed method.
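
The entry above relies on ADMM to solve its deblurring subproblems. The sketch below is not the paper's multi-channel blind formulation; it only illustrates, for a single-channel non-blind case with an assumed total-variation regularizer, how ADMM splits a deconvolution objective into an FFT-solvable quadratic step, a soft-thresholding step, and a dual update.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small kernel to `shape` and shift it so that circular
    convolution via the FFT is centered on the kernel."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def admm_tv_deconv(y, kernel, lam=0.01, rho=0.1, iters=50):
    """Non-blind deconvolution, min_x 0.5*||k*x - y||^2 + lam*||grad x||_1,
    solved with ADMM by splitting z = grad x (circular boundary conditions)."""
    shape = y.shape
    K = psf2otf(kernel, shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), shape)     # horizontal difference
    Dy = psf2otf(np.array([[1.0], [-1.0]]), shape)   # vertical difference
    denom = np.abs(K) ** 2 + rho * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    KtY = np.conj(K) * np.fft.fft2(y)
    zx, zy, ux, uy = (np.zeros(shape) for _ in range(4))
    x = y.copy()
    for _ in range(iters):
        # x-update: exact solve of the quadratic subproblem in the Fourier domain.
        rhs = KtY + rho * (np.conj(Dx) * np.fft.fft2(zx - ux)
                           + np.conj(Dy) * np.fft.fft2(zy - uy))
        x = np.real(np.fft.ifft2(rhs / denom))
        gx = np.real(np.fft.ifft2(Dx * np.fft.fft2(x)))
        gy = np.real(np.fft.ifft2(Dy * np.fft.fft2(x)))
        # z-update: soft-thresholding of the image gradients.
        zx = np.sign(gx + ux) * np.maximum(np.abs(gx + ux) - lam / rho, 0.0)
        zy = np.sign(gy + uy) * np.maximum(np.abs(gy + uy) - lam / rho, 0.0)
        # dual variable update.
        ux, uy = ux + gx - zx, uy + gy - zy
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    x_true = rng.random((64, 64))
    kernel = np.ones((5, 5)) / 25.0            # known blur for this non-blind demo
    y = np.real(np.fft.ifft2(psf2otf(kernel, x_true.shape) * np.fft.fft2(x_true)))
    print(admm_tv_deconv(y, kernel).shape)     # (64, 64)
```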