• Title/Summary/Keyword: multisensor data fusion


Reducing Spectral Signature Confusion of Optical Sensor-based Land Cover Using SAR-Optical Image Fusion Techniques

  • Tateishi, Ryutaro;Wikantika, Ketut;M.A., Mohammed Aslam
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.107-109
    • /
    • 2003
  • Optical sensor-based land cover categories suffer from spectral signature confusion and degraded classification accuracy. In classification tasks, the goal of fusing data from different sensors is to reduce the classification error rate obtained by single-source classification. This paper describes land cover/land use classification results derived solely from Landsat TM (TM) data and from multisensor image fusion between JERS-1 SAR (JERS) and TM data. The best radar data manipulation is fused with TM through various techniques. Classification results are relatively good: the highest Kappa coefficient is obtained by the principal component analysis-high pass filtering (PCA+HPF) technique, together with a significantly high overall accuracy.

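
The PCA+HPF fusion named in the abstract injects high-frequency SAR detail into the optical imagery. A minimal sketch of the detail-injection step, assuming a 3x3 mean filter as the low-pass stage and omitting the PCA transform (function names and the toy image layout are illustrative, not from the paper):

```python
def mean_filter(img, i, j):
    """3x3 local mean of img around (i, j), clamping at the borders."""
    rows, cols = len(img), len(img[0])
    vals = [img[r][c]
            for r in range(max(0, i - 1), min(rows, i + 2))
            for c in range(max(0, j - 1), min(cols, j + 2))]
    return sum(vals) / len(vals)

def hpf_fuse(optical, sar):
    """Inject the high-frequency SAR component (SAR minus its local
    mean) into the optical band, pixel by pixel."""
    return [[optical[i][j] + (sar[i][j] - mean_filter(sar, i, j))
             for j in range(len(optical[0]))]
            for i in range(len(optical))]
```

A spatially flat SAR patch contributes no detail, so the fused result equals the optical input there; texture in the SAR image is added on top of the optical radiometry.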

Rao-Blackwellized Multiple Model Particle Filter Data Fusion algorithm (Rao-Blackwellized Multiple Model Particle Filter자료융합 알고리즘)

  • Kim, Do-Hyeung
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.4
    • /
    • pp.556-561
    • /
    • 2011
  • It is generally known that particle filters can produce consistent target tracking performance, compared to the Kalman filter, for non-linear and non-Gaussian systems. In this paper, I propose a Rao-Blackwellized multiple model particle filter (RBMMPF) to enhance the computational efficiency of particle filters as well as to reduce sensitivity to modeling errors. Although the Rao-Blackwellized particle filter needs fewer particles than the general particle filter, it achieves similar tracking performance at a lower computational load. Performance comparison results are presented for the RBMMPF using single-sensor information and the RBMMPF using multisensor data fusion.
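
The Rao-Blackwellization idea can be sketched in a toy scalar model: particles sample only the discrete motion mode, while the state conditional on the mode is linear-Gaussian and is handled analytically by a Kalman filter per particle. Everything below (the random-walk model, the mode-transition probability, the absence of resampling) is an illustrative assumption, not the paper's model:

```python
import math
import random

def kalman_step(mean, var, q, r, z):
    """Scalar Kalman predict (random-walk model, process noise q) and
    update with measurement z (noise r). Returns posterior mean,
    posterior variance, and the measurement log-likelihood."""
    var_p = var + q                       # predict
    s = var_p + r                         # innovation variance
    k = var_p / s                         # Kalman gain
    loglik = -0.5 * math.log(2 * math.pi * s) - 0.5 * (z - mean) ** 2 / s
    return mean + k * (z - mean), (1 - k) * var_p, loglik

def rbmmpf(measurements, modes, n=200, r=1.0, seed=0):
    """Toy RBMMPF: each particle samples only a discrete mode (a
    process-noise level); the linear state is marginalized by a
    per-particle Kalman filter. No resampling, for brevity."""
    rng = random.Random(seed)
    parts = [{"mode": rng.randrange(len(modes)), "mean": 0.0,
              "var": 10.0, "w": 1.0 / n} for _ in range(n)]
    for z in measurements:
        for p in parts:
            if rng.random() < 0.1:        # occasional mode switch
                p["mode"] = rng.randrange(len(modes))
            p["mean"], p["var"], ll = kalman_step(
                p["mean"], p["var"], modes[p["mode"]], r, z)
            p["w"] *= math.exp(ll)
        total = sum(p["w"] for p in parts)
        for p in parts:                   # normalize weights
            p["w"] /= total
    return sum(p["w"] * p["mean"] for p in parts)
```

Because the continuous state is filtered analytically, the particle set only has to cover the small discrete mode space, which is why fewer particles suffice.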

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector (퍼지 클래스 벡터를 이용하는 다중센서 융합에 의한 무감독 영상분류)

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.329-339
    • /
    • 2003
  • In this study, an approach of image fusion at the decision level has been proposed for unsupervised image classification using images acquired from multiple sensors with different characteristics. The proposed method applies to each sensor separately an unsupervised image classification scheme based on spatial region-growing segmentation, which makes use of hierarchical clustering, and iteratively computes the maximum likelihood estimates of fuzzy class vectors for the segmented regions by the EM (expectation maximization) algorithm. The fuzzy class vector is considered as an indicator vector whose elements represent the probabilities that the region belongs to each of the existing classes. The method then combines the classification results of the individual sensors using the fuzzy class vectors. This approach does not require as high a precision in spatial coregistration between the images of different sensors as pixel-level image fusion schemes do. In this study, the proposed method has been applied to multispectral SPOT and AIRSAR data observed over the north-eastern area of Jeollabuk-do, and the experimental results show that it provides more correct information for the classification than the augmented-vector technique, which is the most conventional approach of image fusion at the pixel level.
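
A decision-level combination of per-region class vectors can be illustrated with an elementwise product rule; the paper's exact combination of fuzzy class vectors may differ, so treat this as a generic sketch:

```python
def fuse_class_vectors(vectors):
    """Decision-level fusion: combine the per-sensor class-probability
    vectors of one region by an elementwise product, renormalized so
    the fused vector sums to one."""
    fused = [1.0] * len(vectors[0])
    for v in vectors:
        fused = [f * p for f, p in zip(fused, v)]
    total = sum(fused)
    return [f / total for f in fused]
```

Since only the per-region class vectors are combined, the sensors' images never have to be aligned pixel by pixel, which is the coregistration advantage noted in the abstract.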

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.3
    • /
    • pp.135-152
    • /
    • 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions for quantitative land management using aerial multi-sensors, but most of these data are used only for the purpose of the original project. Hyperspectral sensor data, which are mainly used for land cover classification, have the advantage of high classification accuracy, but accurate land cover states are difficult to classify because only the visible and near-infrared wavelengths are acquired and the spatial resolution is low. Therefore, research is needed that can improve the accuracy of land cover classification by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multisensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as fusion rate, band accumulation, and spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and land cover classification accuracy was calculated while applying incremental changes to the fusion variables. Optimal fusion variables for hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
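
The incremental search over fusion variables described above can be sketched as a one-dimensional grid search; the accuracy function and step size here are placeholders for the paper's classification-accuracy evaluation:

```python
def best_fusion_rate(evaluate, lo=0.0, hi=1.0, step=0.05):
    """Incrementally vary one fusion variable (e.g. the fusion rate)
    and keep the value that maximizes a caller-supplied accuracy
    function, mirroring the incremental search over fusion variables."""
    best_rate, best_acc = lo, float("-inf")
    rate = lo
    while rate <= hi + 1e-9:
        acc = evaluate(rate)              # e.g. land cover accuracy
        if acc > best_acc:
            best_rate, best_acc = rate, acc
        rate += step
    return best_rate, best_acc
```

In practice `evaluate` would regenerate the fused data for each rate and score a classification against reference land cover, which is the expensive step the grid granularity trades off against.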

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.65-65
    • /
    • 2002
  • In remote sensing, images are acquired over the same area by sensors of different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many applications of image classification, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to the classification of multisensor data, as a data fusion scheme at the pixel level, is to concatenate the data into one vector as if they were measurements from a single sensor. The multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. For modeling and eliminating the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the self certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be far less reliable than the others and may even exacerbate the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is utilized to measure the degree of reliability. A team of positively dependent bands can gather more information jointly than a team of independent ones, but when bands are negatively dependent, the combined analysis of these bands may give worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a data fusion scheme at the decision level is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates being considered for merging. At the first level, the image is partitioned into regions, i.e. sets of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the low level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.

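
The dependence-based band splitting can be illustrated with a plain mutual-information measure standing in for the conditional information variation (the threshold and the greedy grouping are assumptions of this sketch):

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Mutual information (bits) between two discretized band vectors."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def split_bands(bands, threshold=0.5):
    """Greedy split: a band joins an existing subset when its mutual
    information with some member exceeds the threshold, else it opens
    a new subset. Each subset would then be classified separately and
    merged at the decision level."""
    groups = []
    for name, data in bands.items():
        for g in groups:
            if any(mutual_information(data, bands[m]) > threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

Two strongly dependent bands end up in the same subset, while a statistically independent band is classified on its own, matching the split-then-fuse strategy described above.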

Obstacle Avoidance and Planning using Optimization of Cost Function-based Distributed Control Command (분산제어명령 기반의 비용함수 최소화를 이용한 장애물회피와 주행기법)

  • Bae, Dongseog;Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.3
    • /
    • pp.125-131
    • /
    • 2018
  • In this paper, we propose a homogeneous multisensor-based navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments with moving obstacles, using multiple ultrasonic sensors. Instead of a "sensor fusion" method, which generates the trajectory of a robot based upon an environment model and sensory data, a "command fusion" method by fuzzy inference is used to govern the robot motions. The major factors for robot navigation are represented as a cost function. Using the data of the robot states and the environment, the weight value of each factor is determined by fuzzy inference for an optimal trajectory in dynamic environments. For the evaluation of the proposed algorithm, we performed simulations on a PC as well as real experiments with the mobile robot AmigoBot. The results show that the proposed algorithm can identify obstacles in unknown environments and guide the robot to the goal location safely.
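
The command-fusion step reduces to scoring candidate motion commands with a weighted cost function; in the paper the weights come from fuzzy inference, while this sketch uses fixed weights and made-up factor names:

```python
def fuse_commands(candidates, weights):
    """Command fusion: each candidate command carries its navigation
    factors (e.g. obstacle proximity, deviation from the goal
    direction); the command with the minimal weighted cost wins."""
    def cost(c):
        return sum(w * c[k] for k, w in weights.items())
    return min(candidates, key=cost)
```

With fuzzy inference in the loop, the weight on obstacle proximity would rise near obstacles and fall in open space, shifting which candidate heading minimizes the cost.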

Accurate Vehicle Positioning on a Numerical Map

  • Laneurit, Jean;Chapuis, Roland;Chausse, Frédéric
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.1
    • /
    • pp.15-31
    • /
    • 2005
  • Nowadays, road safety is an important research field. One of the principal research topics in this field is vehicle localization in the road network. This article presents a multisensor fusion approach able to locate a vehicle with decimeter precision. The information used in this method comes from the following sensors: a low-cost GPS, a digital camera, an odometer, and a steering angle sensor. Taking into account a complete model of the errors on GPS data (bias on position and non-white errors), as well as the data provided by an original approach coupling a vision algorithm with a precise numerical map, allows us to reach this precision.
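
A sensor mix like the one above is typically fused with a Kalman-style predict/update cycle. A per-axis scalar sketch (the real system also models GPS bias, non-white errors, and the vision/map constraint, none of which appear here):

```python
def fuse_gps_odometry(x, var, odo_dx, q_odo, gps_x, r_gps):
    """One predict/update cycle of a per-axis Kalman filter: the
    odometer drives the prediction, a GPS fix corrects it."""
    x_pred = x + odo_dx                    # predict with odometry
    var_pred = var + q_odo                 # odometry error inflates variance
    k = var_pred / (var_pred + r_gps)      # Kalman gain
    x_new = x_pred + k * (gps_x - x_pred)  # correct with GPS position
    var_new = (1 - k) * var_pred
    return x_new, var_new
```

The corrected estimate always lands between the odometry prediction and the GPS fix, weighted by their respective uncertainties, and the posterior variance shrinks after each GPS update.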

Precise Geometric Registration of Aerial Imagery and LIDAR Data

  • Choi, Kyoung-Ah;Hong, Ju-Seok;Lee, Im-Pyeong
    • ETRI Journal
    • /
    • v.33 no.4
    • /
    • pp.506-516
    • /
    • 2011
  • In this paper, we develop a registration method to eliminate the geometric inconsistency between the stereo-images and light detection and ranging (LIDAR) data obtained by an airborne multisensor system. This method consists of three steps: registration primitive extraction, correspondence establishment, and exterior orientation parameter (EOP) adjustment. As the primitives, we employ object points and linked edges from the stereo-images and planar patches and intersection edges from the LIDAR data. After extracting these primitives, we establish the correspondence between them, classified into vertical and horizontal groups. These corresponding pairs are simultaneously incorporated as stochastic constraints into aerial triangulation based on the bundle block adjustment. Finally, the EOPs of the images are adjusted to minimize the inconsistency. The results from the application of our method to real data demonstrate that the inconsistency between the two data sets is significantly reduced, from the range of 0.5 m to 2 m down to less than 0.05 m. Hence, the results show that the proposed method is useful for the data fusion of aerial images and LIDAR data.
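
The point-to-plane correspondence idea can be isolated in a one-unknown least-squares sketch: estimate a single vertical correction that minimizes the residuals between image-derived object points and LIDAR planar patches (the actual method adjusts the full EOPs of every image inside a bundle block adjustment):

```python
def adjust_vertical_offset(points, planes):
    """Least-squares estimate of one vertical correction: shift the
    photogrammetric object points in Z so their point-to-plane
    residuals against the LIDAR patches are minimized. Each plane is
    (a, b, c, d) with a*x + b*y + c*z + d = 0 and c != 0, paired
    one-to-one with its corresponding point."""
    residuals = []
    for (x, y, z), (a, b, c, d) in zip(points, planes):
        plane_z = -(a * x + b * y + d) / c   # plane height under the point
        residuals.append(plane_z - z)
    # for a constant shift, the least-squares optimum is the mean residual
    return sum(residuals) / len(residuals)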

Recognition of Tactile Image Dependent on Imposed Force Using Fuzzy Fusion Algorithm (접촉력에 따라 변하는 Tactile 영상의 퍼지 융합을 통한 인식기법)

  • 고동환;한헌수
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.3
    • /
    • pp.95-103
    • /
    • 1998
  • This paper deals with a problem occurring in the recognition of tactile images due to the effect of the force imposed at the measurement moment. The tactile image of a contact surface, used for recognition of the surface type, varies depending on the imposed force, so that a false recognition may result. This paper fuzzifies two parameters of the contour of a tactile image with membership functions formed by considering the imposed force. The two fuzzified parameters are fused by the average Minkowski distance. The proposed algorithm was implemented on a multisensor system composed of an optical tactile sensor and a 6-axis force/torque sensor. In the experiments, the proposed algorithm has shown an average recognition ratio greater than 86.9% over all imposed force ranges and object models, which is about a 14% enhancement compared to the case where only the contour information is used. The proposed algorithm can be used for end-effectors manipulating deformable or fragile objects, or for recognition of 3D objects by implementing it on a multi-fingered robot hand.

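
The force-dependent fuzzification and Minkowski-distance fusion can be sketched as follows; the triangular membership shape and the force range are illustrative assumptions, not the experimentally derived functions of the paper:

```python
def force_membership(force, f_lo=1.0, f_hi=5.0):
    """Triangular membership: confidence in a contour parameter is
    highest at a nominal contact force and falls off on either side
    (illustrative shape and range)."""
    mid = (f_lo + f_hi) / 2
    if force <= f_lo or force >= f_hi:
        return 0.0
    if force <= mid:
        return (force - f_lo) / (mid - f_lo)
    return (f_hi - force) / (f_hi - mid)

def fused_distance(params, model, force, p=2):
    """Average Minkowski distance between the measured and model
    contour parameters, weighted by the force-dependent membership."""
    mu = force_membership(force)
    d = (sum(abs(a - b) ** p for a, b in zip(params, model))
         / len(params)) ** (1 / p)
    return mu * d
```

Measurements taken at forces outside the trusted range contribute nothing to the match score, which is how the force dependence is neutralized.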

Dempster-Shafer Fusion of Multisensor Imagery Using Gaussian Mass Function (Gaussian분포의 질량함수를 사용하는 Dempster-Shafer영상융합)

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.6
    • /
    • pp.419-425
    • /
    • 2004
  • This study proposes a data fusion method based on the Dempster-Shafer evidence theory. The Dempster-Shafer fusion uses mass functions obtained under the assumption of class-independent Gaussian distributions. In the Dempster-Shafer approach, uncertainty is represented by the 'belief interval', equal to the difference between the values of the 'belief' function and the 'plausibility' function, which measure imprecision and uncertainty. By utilizing the Dempster-Shafer scheme to fuse the data from multiple sensors, the results of classification can be improved. It also allows users to consider regions with mixed classes in the training process; in most practices, it is hard to find regions with a pure class. In this study, the proposed method has been applied to the KOMPSAT-EOC panchromatic image and LANDSAT ETM+ NDVI data acquired over the Yongin/Nuengpyung area of Kyunggi-do. The results show that it has potential for effective data fusion of multiple sensor imagery.
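
Dempster's rule of combination, which underlies the fusion described above, can be sketched over a two-class frame; in the paper the masses come from class-independent Gaussian likelihoods, whereas fixed illustrative numbers are used here:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame; focal elements are frozensets of class labels. Mass on
    the full frame expresses ignorance (e.g. mixed-class regions)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb       # mass on disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def belief(m, hypothesis):
    """Belief = total mass committed to subsets of the hypothesis;
    the belief-plausibility gap is the 'belief interval' above."""
    return sum(v for k, v in m.items() if k <= hypothesis)
```

Mass left on the whole frame by one sensor lets the other sensor's evidence dominate, which is how mixed-class training regions can still contribute.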