• Title/Summary/Keyword: Fusion Image

Visual Control of Mobile Robots Using Multisensor Fusion System

  • Kim, Jung-Ha;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.91.4-91
    • /
    • 2001
  • In this paper, the development of a sensor fusion algorithm for visual control of a mobile robot is presented. The output data from the visual sensor include a time lag due to the image-processing computation. The sampling rate of the visual sensor is considerably low, so it should be combined with other sensors to control fast motion. The main purpose of this paper is to develop a method that constitutes a sensor fusion system giving optimal state estimates. The proposed sensor fusion system combines the visual sensor and an inertial sensor using a modified Kalman filter. A kind of multi-rate Kalman filter which treats the slow sampling rate ...
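
The multi-rate idea described above can be sketched as follows: predict at the fast inertial-sensor rate on every step, and apply the measurement update only on steps where a (slower) camera reading arrives. All matrices, rates, and noise values here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative 1-D constant-velocity model: state x = [position, velocity].
F = np.array([[1.0, 0.01], [0.0, 1.0]])   # fast-rate transition (assumed 100 Hz)
H = np.array([[1.0, 0.0]])                # camera measures position only
Q = np.eye(2) * 1e-4                      # process noise (assumed)
R = np.array([[0.05]])                    # camera measurement noise (assumed)

x = np.zeros((2, 1))
P = np.eye(2)

def predict(x, P):
    """Run at the fast inertial-sensor rate."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Run only when a slow camera measurement arrives."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Multi-rate loop: one camera frame every 10th inertial step (assumed ratio).
for k in range(100):
    x, P = predict(x, P)
    if k % 10 == 9:
        z = np.array([[0.01 * (k + 1)]])   # synthetic position reading
        x, P = update(x, P, z)
```

Handling the visual sensor's time lag (compensating a delayed measurement against past states) is the part the abstract leaves to the modified filter and is not modeled in this sketch.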

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.6
    • /
    • pp.635-644
    • /
    • 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) have been used in various applications, and mainly optical images are used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, as well as optical images, to train the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) variants. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features representing statistical texture information derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings by applying hybrid fusion - a mixed method of early fusion and late fusion - resulted in a 32.65% improvement in building detection rate compared with training on optical images only. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
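
The early/late fusion distinction above can be illustrated with a minimal numpy sketch: early fusion stacks the modalities into one multi-channel input before any model sees the data, while late fusion runs a model per modality and merges the outputs. The arrays and the per-pixel scoring function are toy stand-ins, not the paper's Detectron2 pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
optical = rng.random((64, 64, 3))   # stand-in optical bands
infrared = rng.random((64, 64, 1))  # stand-in IR band
lidar = rng.random((64, 64, 1))     # stand-in LiDAR-derived layer

# Early fusion: concatenate modalities into one multi-channel input tensor.
early_input = np.concatenate([optical, infrared, lidar], axis=-1)

def toy_score(x):
    """Stand-in for a per-pixel building score from one model."""
    return x.mean(axis=-1)

# Late fusion: run a (toy) model per modality, then combine the outputs.
late_score = (toy_score(optical) + toy_score(infrared) + toy_score(lidar)) / 3
```

Hybrid fusion, as the abstract describes, mixes both: some modalities are concatenated early, and their combined output is merged late with the rest.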

Generation of Simulated Image from Atmospheric Corrected Landsat TM Images (대기보정된 Landsat TM 영상으로부터 모의영상 제작)

  • Lee, Soo Bong;La, Phu Hien;Eo, Yang Dam;Pyeon, Mu Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.1
    • /
    • pp.1-9
    • /
    • 2015
  • A remote-sensed image simulation for given weather and season conditions can be performed by a reverse atmospheric correction, which is a function of image preprocessing. In this study, we conducted an experiment to generate a simulated image of the raw image, prior to atmospheric correction, under specific weather conditions. The methods applied in this study were the Forster algorithm (1984) and the 6S RTM (Radiative Transfer Model). The simulated images were compared with the original image to analyze their agreement. The results from the 6S RTM method show better agreement than Forster, with a mean RMSE of the DN difference of 9.35 and a mean $R^2$ of 0.7. In conclusion, a simulated image is practically feasible when its period and season are similar to those of the reference image.
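
The two comparison metrics quoted above (RMSE of the DN difference and $R^2$) can be sketched directly; the sample arrays below are made-up digital numbers, and the predicted-vs-observed form of $R^2$ is an assumption about how the paper computed it.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error of digital-number (DN) differences."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def r_squared(a, b):
    """Coefficient of determination between two band arrays
    (predicted-vs-observed form)."""
    ss_res = np.sum((a - b) ** 2)
    ss_tot = np.sum((a - a.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

sim = np.array([10.0, 20.0, 30.0, 40.0])   # made-up simulated DNs
orig = np.array([12.0, 19.0, 29.0, 41.0])  # made-up observed DNs
```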

High Spatial Resolution Satellite Image Simulation Based on 3D Data and Existing Images

  • La, Phu Hien;Jeon, Min Cheol;Eo, Yang Dam;Nguyen, Quang Minh;Lee, Mi Hee;Pyeon, Mu Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.2
    • /
    • pp.121-132
    • /
    • 2016
  • This study proposes an approach for simulating high-spatial-resolution satellite images acquired under arbitrary sun-sensor geometry using existing images and 3D (three-dimensional) data. First, satellite images having significant differences in spectral regions compared with the simulated image were transformed to the same spectral regions as the simulated image by using the UPDM (Universal Pattern Decomposition Method). Simultaneously, shadows cast by buildings or tall features under the new sun position were modeled. Then, pixels that changed from shadow into non-shadow areas, and vice versa, were simulated on the basis of the existing images. Finally, buildings as viewed from the new sensor position were modeled using an open-library-based 3D reconstruction program. An experiment was conducted to simulate WV-3 (WorldView-3) images acquired under two different sun-sensor geometries based on a Pleiades 1A image, an additional WV-3 image, a Landsat image, and 3D building models. The results show that the shapes of the buildings were modeled effectively, although some problems were noted in simulating pixels changing from building shadows into non-shadow. Additionally, the mean reflectance of the simulated image was quite similar to that of the actual images in vegetation and water areas. However, significant gaps between the mean reflectance of the simulated and actual images were noted in soil and road areas, which could be attributed to differences in moisture content.

Infrared Image Sharpness Enhancement Method Using Super-resolution Based on Adaptive Dynamic Range Coding and Fusion with Visible Image (적외선 영상 선명도 개선을 위한 ADRC 기반 초고해상도 기법 및 가시광 영상과의 융합 기법)

  • Kim, Yong Jun;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.11
    • /
    • pp.73-81
    • /
    • 2016
  • In general, infrared images have less sharpness and fewer image details than visible images, so prior image up-scaling methods are not effective on infrared images. In order to solve this problem, this paper proposes an algorithm which first up-scales an input infrared (IR) image using an adaptive dynamic range coding (ADRC)-based super-resolution (SR) method and then fuses the result with the corresponding visible image. The proposed algorithm consists of an up-scaling phase and a fusion phase. First, an input IR image is up-scaled by the proposed ADRC-based SR algorithm. In the dictionary-learning stage of this up-scaling phase, so-called 'pre-emphasis' processing is applied to the training-purpose high-resolution images, hence better sharpness is achieved. In the following fusion phase, high-frequency information is extracted from the visible image corresponding to the IR image and adaptively weighted according to the complexity of the IR image. Finally, the output IR image is obtained by adding the processed high-frequency information to the up-scaled IR image. The experimental results show that the proposed algorithm provides better results than the state-of-the-art SR method, i.e., the anchored neighborhood regression (A+) algorithm. For example, in terms of just noticeable blur (JNB), the proposed algorithm shows a value 0.2184 higher than A+. The proposed algorithm also outperforms previous works in terms of subjective visual quality.
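
The fusion phase described above (inject weighted high-frequency detail from the registered visible image into the up-scaled IR image) can be sketched as follows. The blur-residual high-pass and the fixed weight are assumptions for illustration; the paper extracts high frequencies with its own method and weights them adaptively by local IR complexity.

```python
import numpy as np

def box_blur(img):
    """Simple 3x3 mean filter, used here as a stand-in low-pass."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse_ir_with_visible(ir_up, visible, weight=0.5):
    """Add weighted high-frequency detail from the registered visible
    image to an already up-scaled IR image."""
    high_freq = visible - box_blur(visible)   # blur residual = detail layer
    return ir_up + weight * high_freq

rng = np.random.default_rng(1)
ir_up = rng.random((32, 32))    # stand-in up-scaled IR image
visible = rng.random((32, 32))  # stand-in registered visible image
fused = fuse_ir_with_visible(ir_up, visible)
```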

Comparative Analysis of Image Fusion Methods According to Spectral Responses of High-Resolution Optical Sensors (고해상 광학센서의 스펙트럼 응답에 따른 영상융합 기법 비교분석)

  • Lee, Ha-Seong;Oh, Kwan-Young;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.2
    • /
    • pp.227-239
    • /
    • 2014
  • This study aims to evaluate the performance of various image fusion methods based on the spectral responses of high-resolution optical satellite sensors such as KOMPSAT-2, QuickBird, and WorldView-2. The image fusion methods used in this study are GIHS, GIHSA, GS1, and AIHS. The quality of each image fusion method was evaluated with both quantitative and visual analysis. The quantitative analysis was carried out using the spectral angle mapper index (SAM), the relative dimensionless global error in synthesis (spectral ERGAS), and the image quality index (Q4). The results indicate that the GIHSA method is slightly better than the other methods for KOMPSAT-2 images. On the other hand, the GS1 method is suitable for QuickBird and WorldView-2 images.
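
Of the quality indices named above, SAM is the simplest to sketch: it is the angle between the per-pixel spectral vectors of the fused image and the multispectral reference, averaged over the image (0 means identical spectral directions). The arrays below are synthetic stand-ins.

```python
import numpy as np

def sam_degrees(fused, reference, eps=1e-12):
    """Mean spectral angle (degrees) between the per-pixel spectral
    vectors of a fused image and its reference; both are (H, W, bands)."""
    dot = (fused * reference).sum(axis=-1)
    norms = np.linalg.norm(fused, axis=-1) * np.linalg.norm(reference, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

rng = np.random.default_rng(2)
ref = rng.random((16, 16, 4)) + 0.1   # synthetic 4-band reference
```

Because SAM measures only the direction of the spectral vectors, uniformly scaling an image (a pure brightness change) leaves the index at zero, which is why it is paired with ERGAS and Q4 in the study.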

Refinement of Disparity Map using the Rule-based Fusion of Area and Feature-based Matching Results

  • Um, Gi-Mun;Ahn, Chung-Hyun;Kim, Kyung-Ok;Lee, Kwae-Hi
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.304-309
    • /
    • 1999
  • In this paper, we present a new disparity map refinement algorithm using the statistical characteristics of disparity maps and edge information. The proposed algorithm generates a refined disparity map from the disparity maps obtained by area-based and feature-based stereo matching, selecting the disparity value at each edge point based on the statistics of both disparity maps. Experimental results on an aerial stereo image show better results than conventional fusion algorithms in terms of disparity error. This algorithm can be applied to the reconstruction of buildings from high-resolution remote sensing data.
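
The selection step above can be illustrated with a simplified rule: at edge pixels take the feature-based disparity (feature matching is typically more reliable at edges), elsewhere keep the area-based value. The paper's actual rule is statistics-driven; the fixed mask-based choice and the toy maps below are assumptions.

```python
import numpy as np

def refine_disparity(area_disp, feat_disp, edge_mask):
    """Select the feature-based disparity at edge pixels and the
    area-based disparity elsewhere (simplified stand-in for the
    paper's statistics-based selection)."""
    return np.where(edge_mask, feat_disp, area_disp)

area = np.full((4, 4), 10.0)            # toy area-based disparity map
feat = np.full((4, 4), 12.0)            # toy feature-based disparity map
edges = np.zeros((4, 4), dtype=bool)
edges[1, :] = True                      # pretend row 1 is a building edge
refined = refine_disparity(area, feat, edges)
```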

Reconstruction of Buildings from Satellite Image and LIDAR Data

  • Guo, T.;Yasuoka, Y.
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.519-521
    • /
    • 2003
  • This paper presents an approach for the automatic extraction and reconstruction of buildings in urban built-up areas based on the fusion of high-resolution satellite imagery and LIDAR data. The presented data fusion scheme is essentially motivated by the fact that image and range data are quite complementary. Raised urban objects are first segmented from the terrain surface in the LIDAR data by making use of the spectral signature derived from the satellite image, after which potential building regions are initially detected in a hierarchical scheme. A novel 3D building reconstruction model is also presented, based on the assumption that most buildings can be approximately decomposed into polyhedral patches. Under the constraints of the presented building model, 3D edges are used to generate hypotheses that are then verified, and a subsequent logical processing of the primitive geometric patches leads to the 3D reconstruction of buildings with good shape detail. The approach was applied to the test sites and shows good performance; an evaluation is also described in the paper.

A Comparative Analysis of IHS, FIHS, PCA, BT and WT Image Fusion Methods Using IKONOS Image Data (IKONOS 영상을 활용한 IHS, FIHS, PCA, BT, WT 영상 융합법의 비교분석)

  • Kim, Hyun;Yu, Jae Ho;Kim, Joong Gon;Seo, Yong Su
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.599-602
    • /
    • 2009
  • This paper presents a comparative analysis of five different fusion methods for merging multispectral images with a panchromatic image: IHS, FIHS, PCA, BT, and WT. The comparative analysis, based on visual and quantitative evaluation, is performed on the merged results. The results show that the FIHS method provides the best result, followed by the BT, PCA, IHS, and WT methods.
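
Of the five methods compared above, FIHS (fast IHS) has the most compact formulation: each multispectral band receives the difference between the panchromatic band and the per-pixel intensity (the mean of the MS bands). The sketch below uses that standard form on synthetic arrays; the images and their registration are assumed.

```python
import numpy as np

def fihs_fuse(ms, pan):
    """Fast IHS pansharpening: add (Pan - I) to every multispectral band,
    where I is the per-pixel band mean. `ms` is (H, W, bands) already
    resampled to the pan grid; `pan` is (H, W)."""
    intensity = ms.mean(axis=-1, keepdims=True)
    return ms + (pan[..., None] - intensity)

rng = np.random.default_rng(3)
ms = rng.random((8, 8, 3))   # synthetic multispectral image
pan = rng.random((8, 8))     # synthetic panchromatic image
fused = fihs_fuse(ms, pan)
```

A useful property of this form is that the band mean of the fused image equals the panchromatic band exactly, which is what makes the method fast to compute and easy to analyze.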
