• Title/Abstract/Keyword: Multisensor Data Fusion

31 search results

Reducing Spectral Signature Confusion of Optical Sensor-based Land Cover Using SAR-Optical Image Fusion Techniques

  • Tateishi, Ryutaro;Wikantika, Ketut;Mohammed Aslam, M.A.
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS / pp.107-109 / 2003
  • Optical sensor-based land cover categories produce spectral signature confusion along with degraded classification accuracy. In classification tasks, the goal of fusing data from different sensors is to reduce the error rate obtained by single-source classification. This paper describes land cover/land use classification results derived solely from Landsat TM (TM) data and from multisensor image fusion between JERS-1 SAR (JERS) and TM data. The best-performing radar data manipulation is fused with TM through various techniques. The classification results are relatively good; the highest Kappa coefficient is obtained with the principal component analysis plus high-pass filtering (PCA+HPF) technique, together with a significantly high overall accuracy.

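The PCA+HPF technique named in this abstract can be sketched as follows. This is a minimal illustrative version with assumed details (a box-filter high pass, substitution of the statistically matched SAR image into the first principal component), not the authors' exact procedure:

```python
import numpy as np

def pca_hpf_fusion(optical, sar, kernel=5):
    """Sketch of PCA+HPF fusion: substitute the first principal
    component of the optical bands with the statistically matched
    SAR image plus its high-pass detail, then invert the transform."""
    h, w, b = optical.shape
    X = optical.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # eigen-decomposition of the band covariance (PCA)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]          # descending variance
    pcs = Xc @ vecs
    # high-pass detail of the SAR image: original minus a box-filter mean
    pad = kernel // 2
    sp = np.pad(sar.astype(float), pad, mode="edge")
    low = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            low[i, j] = sp[i:i + kernel, j:j + kernel].mean()
    hp = sar - low
    # match SAR to PC1 statistics, add the high-pass detail, substitute
    pc1 = pcs[:, 0]
    sar_m = (sar - sar.mean()) / (sar.std() + 1e-12) * pc1.std() + pc1.mean()
    pcs[:, 0] = (sar_m + hp).ravel()
    # invert the PCA to get the fused multiband image
    return (pcs @ vecs.T + mean).reshape(h, w, b)
```

The fused image keeps the optical band structure while injecting SAR texture into the dominant component.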

Rao-Blackwellized Multiple Model Particle Filter Data Fusion Algorithm

  • 김도형
    • 한국항행학회논문지 / Vol. 15, No. 4 / pp.556-561 / 2011
  • In nonlinear systems, the particle filter is generally known to outperform the Kalman filter in target tracking, but it has the drawback of a heavy computational load. This paper introduces the Rao-Blackwellized Multiple Model Particle Filter (RBMMPF), which reduces the model sensitivity of the Rao-Blackwellized particle filter, a filter that achieves the same performance as a standard particle filter with fewer particles and hence less computation, and applies to it a data fusion technique that integrates multisensor information. Simulations compare and analyze the target tracking performance of the RBMMPF using single-sensor information against that of the RBMMPF fusing multisensor information.
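The multisensor fusion step can be illustrated with a plain particle filter in which each sensor contributes a likelihood factor to the weight update. This is a generic sketch only; the paper's RBMMPF additionally Rao-Blackwellizes part of the state and runs multiple motion models:

```python
import numpy as np

def pf_multisensor_step(particles, weights, measurements, sigmas, rng):
    """One predict/update/resample cycle of a 1-D particle filter in
    which several sensors are fused by multiplying their measurement
    likelihoods (hypothetical random-walk motion model)."""
    n = len(particles)
    # predict: random-walk motion model (assumed process noise 0.1)
    particles = particles + rng.normal(0.0, 0.1, size=n)
    # update: data fusion as a product of per-sensor Gaussian likelihoods
    for z, s in zip(measurements, sigmas):
        weights = weights * np.exp(-0.5 * ((z - particles) / s) ** 2)
    weights = weights / weights.sum()
    # systematic resampling back to uniform weights
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

With two sensors observing the same target, the particle cloud concentrates around the jointly most likely position.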

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector

  • 이상훈
    • 대한원격탐사학회지 / Vol. 19, No. 4 / pp.329-339 / 2003
  • This study proposes a decision-level image fusion technique for unsupervised classification of images collected by sensors with different characteristics. The proposed technique independently applies an unsupervised hierarchical clustering classification based on spatially extended segmentation to the image from each sensor, and then fuses the per-sensor classification results using the fuzzy class vectors of the resulting segments. The fuzzy class vector is regarded as an indicator vector expressing the probability that a segment belongs to each class, and the maximum likelihood estimates of the related variables are computed iteratively by Expectation-Maximization (EM). Because segmentation and classification are performed per sensor, or per group of bands with the same characteristics, and the segment-level results are then combined via fuzzy class vectors, the approach does not demand the high inter-pixel spatial registration accuracy required by the pixel-level fusion commonly used for multisensor image classification. Applied to multispectral SPOT and AIRSAR imagery observed over the northwestern part of Jeollabuk-do on the Korean peninsula, the proposed fusion produced land cover classifications that combine the information from the different sensors more appropriately than fusion by the extended (stacked) vector approach.
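The decision-level combination of per-sensor fuzzy class vectors can be sketched as a normalized product of class memberships. This is a minimal illustration only; the paper estimates the vectors per segment with EM, which is not reproduced here:

```python
import numpy as np

def fuse_class_vectors(vectors):
    """Decision-level fusion of per-sensor fuzzy class vectors for one
    segment: element-wise product of the class memberships,
    renormalized to sum to one."""
    fused = np.ones_like(np.asarray(vectors[0], dtype=float))
    for v in vectors:
        fused = fused * np.asarray(v, dtype=float)
    return fused / fused.sum()
```

Sensors that agree reinforce a class; a class that any sensor rules out is suppressed in the fused vector.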

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • 한국측량학회지 / Vol. 36, No. 3 / pp.135-152 / 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions with aerial multi-sensor platforms for quantitative land management, but most of these data are used only for the immediate project. Hyperspectral sensor data, the main input for land cover classification, offer high classification accuracy, but because they cover only the visible and near-infrared wavelengths and have low spatial resolution, it is difficult to classify the land cover state accurately from them alone. There is therefore a need for research that improves land cover classification accuracy by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multisensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, band accumulation, and spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and land cover classification accuracy computed while varying the fusion parameters incrementally. Optimal fusion parameters for the hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between classification accuracy and the fusion parameters.
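Of the three fusion methods named, band accumulation is the simplest to sketch: the sensor layers are stacked into one per-pixel feature cube for a downstream classifier. A hypothetical minimal version, assuming co-registered inputs of equal spatial size:

```python
import numpy as np

def band_accumulation(hyper, multi, lidar_height):
    """Band-accumulation fusion sketch: stack hyperspectral bands,
    multispectral bands, and a LiDAR height layer along the band axis.
    Inputs are assumed co-registered (same height and width)."""
    return np.concatenate(
        [hyper, multi, lidar_height[..., np.newaxis]], axis=-1
    )
```

Each pixel of the fused cube then carries spectral and height features jointly.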

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2002년도 Proceedings of International Symposium on Remote Sensing / pp.65-65 / 2002
  • In remote sensing, images are acquired over the same area by sensors of different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to classifying multisensor data as a pixel-level data fusion scheme is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. To model and eliminate the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the self-certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be far less reliable than the others and may even degrade the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is utilized to measure the degree of reliability. A team of positively dependent bands can jointly gather more information than a team of independent ones, but when bands are negatively dependent, their combined analysis may yield worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a decision-level data fusion scheme is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates considered for merging. In the first level, the image is partitioned into regions, sets of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the first level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.

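The band-dependence idea above can be illustrated with empirical mutual information between two bands: positively dependent bands can be grouped for joint classification, near-independent ones split into separate subsets. This is a usable stand-in only; the paper's self and conditional information variation measures are related but not identical:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Empirical mutual information between two image bands, computed
    from a joint histogram. Higher values indicate stronger dependence
    between the bands."""
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of band a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of band b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A band compared with itself yields a much larger value than a band compared with independent noise.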

Obstacle Avoidance and Planning using Optimization of a Cost Function based on Distributed Control Commands

  • 배동석;진태석
    • 한국산업융합학회 논문집 / Vol. 21, No. 3 / pp.125-131 / 2018
  • In this paper, we propose a homogeneous multisensor-based navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments with moving obstacles, using multiple ultrasonic sensors. Instead of the "sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method based on fuzzy inference is used to govern the robot's motions. The major factors for robot navigation are represented as a cost function. Using the robot's state and the environment data, the weight of each factor is determined by fuzzy inference to produce an optimal trajectory in dynamic environments. To evaluate the proposed algorithm, we performed simulations on a PC as well as real experiments with the mobile robot AmigoBot. The results show that the proposed algorithm identifies obstacles in unknown environments and guides the robot safely to the goal location.
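The weighted-cost command selection described above can be sketched as follows. The fixed weights and the simple obstacle penalty are hypothetical stand-ins for the fuzzy-inferred weights in the abstract:

```python
def select_command(candidates, goal_angle, obstacle_angles,
                   w_goal=1.0, w_obs=2.0):
    """Command-fusion sketch: score each candidate heading with a
    weighted cost combining goal alignment and proximity to sensed
    obstacle directions, and return the minimizer."""
    def cost(theta):
        goal_term = abs(theta - goal_angle)
        obstacle_term = sum(max(0.0, 1.0 - abs(theta - oa))
                            for oa in obstacle_angles)
        return w_goal * goal_term + w_obs * obstacle_term
    return min(candidates, key=cost)
```

With an obstacle near the goal direction, the minimizer steers away from it; with no obstacles, it heads straight for the goal.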

Accurate Vehicle Positioning on a Numerical Map

  • Laneurit, Jean;Chapuis, Roland;Chausse, Frédéric
    • International Journal of Control, Automation, and Systems / Vol. 3, No. 1 / pp.15-31 / 2005
  • Road safety is an important research field today, and one of its principal topics is vehicle localization in the road network. This article presents a multisensor fusion approach able to locate a vehicle with decimeter precision. The information used in this method comes from the following sensors: a low-cost GPS, a digital camera, an odometer, and a steering angle sensor. Taking into account a complete model of the errors on GPS data (position bias and non-white noise), together with the data provided by an original approach coupling a vision algorithm with a precise numerical map, allows us to reach this precision.
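The core of such GPS/odometry fusion can be illustrated with a scalar Kalman filter cycle: odometry drives the prediction and GPS corrects it. This is a toy stand-in for the paper's full model, which also handles GPS bias, non-white errors, and vision/map constraints; the noise variances here are assumed values:

```python
def kalman_fuse(x, P, u, z_gps, q=0.01, r_gps=4.0):
    """One predict/update cycle of a scalar Kalman filter: the
    odometry displacement u drives the prediction and a GPS position
    z_gps corrects it. q and r_gps are process and measurement noise
    variances (hypothetical values)."""
    # predict with odometry
    x_pred, P_pred = x + u, P + q
    # correct with GPS
    K = P_pred / (P_pred + r_gps)
    return x_pred + K * (z_gps - x_pred), (1.0 - K) * P_pred
```

Over repeated cycles the estimate tracks the true position and the covariance settles well below its initial value.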

Precise Geometric Registration of Aerial Imagery and LIDAR Data

  • Choi, Kyoung-Ah;Hong, Ju-Seok;Lee, Im-Pyeong
    • ETRI Journal / Vol. 33, No. 4 / pp.506-516 / 2011
  • In this paper, we develop a registration method to eliminate the geometric inconsistency between stereo-images and light detection and ranging (LIDAR) data obtained by an airborne multisensor system. This method consists of three steps: registration primitive extraction, correspondence establishment, and exterior orientation parameter (EOP) adjustment. As primitives, we employ object points and linked edges from the stereo-images and planar patches and intersection edges from the LIDAR data. After extracting these primitives, we establish the correspondence between them, classifying the corresponding pairs into vertical and horizontal groups. These pairs are simultaneously incorporated as stochastic constraints into aerial triangulation based on the bundle block adjustment. Finally, the EOPs of the images are adjusted to minimize the inconsistency. The results of applying our method to real data demonstrate that the inconsistency between the two data sets is significantly reduced, from between 0.5 m and 2 m to less than 0.05 m. Hence, the proposed method is useful for the fusion of aerial images and LIDAR data.
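One kind of constraint the abstract describes, tying an image-derived object point to a LIDAR planar patch, can be sketched as a point-to-plane residual. This is a minimal sketch of the constraint itself; the bundle block adjustment machinery is not shown:

```python
import numpy as np

def point_to_plane_residual(point, plane):
    """Signed distance from an image-derived object point to a LIDAR
    planar patch given as (n, d) with n . x = d. Driving such residuals
    toward zero is the role of the stochastic constraints in the
    adjustment."""
    n, d = plane
    n = np.asarray(n, dtype=float)
    p = np.asarray(point, dtype=float)
    return float(np.dot(n, p) - d) / float(np.linalg.norm(n))
```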

Recognition of Tactile Images Dependent on Imposed Force Using a Fuzzy Fusion Algorithm

  • 고동환;한헌수
    • 한국지능시스템학회논문지 / Vol. 8, No. 3 / pp.95-103 / 1998
  • When the shape of a contact surface is recognized from the tactile image provided by a contact sensor, the shape of the image changes with the magnitude of the force applied to the surface. Despite much effort, fully recognizing the contact surface shape with a tactile sensor alone is therefore regarded as very difficult. To solve this problem, this paper proposes a method that simultaneously measures the force at the moment the tactile image is acquired and recognizes the force-dependent image shape using a fuzzy fusion algorithm. The tactile image from the contact sensor is represented by the lengths of its major and minor axes obtained by eigenvector analysis. These are fuzzified by membership functions constructed from the boundary changes measured under the applied force distribution, and fused using the averaged Minkowski distance. The proposed algorithm was implemented and tested on a multisensor system and showed a recognition rate above 86%, uniformly over the applied force magnitudes and the types of measured surfaces. Implemented in a robot hand with multiple fingers, the proposed algorithm can be applied to the precise manipulation or recognition of objects that deform even under small forces.

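The eigenvector analysis used to summarize each contact image can be sketched as follows; the fuzzification and Minkowski-distance fusion steps are not shown:

```python
import numpy as np

def ellipse_axes(tactile):
    """Major/minor axis lengths of a binary tactile image from the
    eigenvalues of its pixel-coordinate covariance (two-sigma
    half-axis lengths, returned as (major, minor))."""
    ys, xs = np.nonzero(tactile)
    pts = np.stack([xs, ys], axis=1).astype(float)
    vals = np.linalg.eigvalsh(np.cov(pts, rowvar=False))
    return 2.0 * np.sqrt(np.maximum(vals, 0.0))[::-1]
```

An elongated contact patch yields a large major axis and a small minor axis, capturing how the imprint stretches under force.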

Dempster-Shafer Fusion of Multisensor Imagery Using Gaussian Mass Function

  • 이상훈
    • 대한원격탐사학회지 / Vol. 20, No. 6 / pp.419-425 / 2004
  • This study proposes a fusion technique based on Dempster-Shafer evidence theory that uses Gaussian mass functions. In Dempster-Shafer fusion, imprecision and uncertainty measures are represented by the belief and plausibility functions, respectively, and the degree of uncertainty is expressed by the "belief interval" between the two function values. Using this Dempster-Shafer fusion, image data collected by different sensors are fused to raise classification accuracy; in particular, compound classes can be defined in the training stage that estimates the classification parameters, avoiding the difficulties of training with simple classes only. Classification experiments applying the proposed Dempster-Shafer fusion were carried out on KOMPSAT EOC panchromatic imagery and a LANDSAT ETM+ vegetation index observed over the Yongin/Neungpyeong area of Gyeonggi-do, and the results show the potential effectiveness of the proposed technique for fusing image data from different sensors.
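Dempster's rule of combination at the heart of such fusion can be sketched for the simplest case of mass functions over singleton classes. This is a generic sketch: the paper builds its masses from Gaussian models and also supports compound classes, which this toy version omits:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over
    singleton classes (dict: label -> mass), with the conflicting
    mass normalized out."""
    labels = set(m1) | set(m2)
    joint = {k: m1.get(k, 0.0) * m2.get(k, 0.0) for k in labels}
    norm = sum(joint.values())           # 1 - conflict
    if norm == 0.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / norm for k, v in joint.items()}
```

Two sources that both favor a class reinforce it; mass assigned to conflicting classes is redistributed by the normalization.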