• Title/Summary/Keyword: Multi-sensor Fusion

Search results: 201

Localization and Control of an Outdoor Mobile Robot Based on an Estimator with Sensor Fusion

  • Jeon, Sang Woon;Jeong, Seul
    • IEMEK Journal of Embedded Systems and Applications, v.4 no.2, pp.69-78, 2009
  • Localization is a very important technique for a mobile robot navigating in outdoor environments. In this paper, the development of a sensor fusion algorithm for controlling mobile robots in outdoor environments is presented. A multi-sensor dead-reckoning subsystem is established based on optimal filtering: heading readings from a magnetic compass, a rate gyro, and two encoders mounted on the robot wheels are first fused, and the dead-reckoned location is computed from them. These data and the position data provided by a global sensing system are then fused by means of an extended Kalman filter. The proposed algorithm is validated in simulation studies in which the mobile robot is driven by a backstepping controller and by a cascaded controller, and the performance of the two controllers is compared.
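
The dead-reckoning step this abstract describes can be sketched in a few lines. This is a simplified illustration, not the paper's method: the paper fuses heading with an optimal filter, whereas the complementary-filter gain `alpha` below is an assumed value.

```python
import math

def fuse_heading(theta_gyro, theta_compass, alpha=0.98):
    # Complementary-filter sketch: trust the integrated gyro heading
    # short-term and the magnetic compass long-term. alpha is an
    # illustrative gain, not a value from the paper.
    return alpha * theta_gyro + (1.0 - alpha) * theta_compass

def dead_reckon(x, y, theta, v, omega, dt):
    # One dead-reckoning step: v is the encoder-derived speed,
    # omega the fused heading rate. Position is integrated along
    # the updated heading.
    theta = theta + omega * dt
    x = x + v * dt * math.cos(theta)
    y = y + v * dt * math.sin(theta)
    return x, y, theta
```

In the paper, the pose produced by this kind of integration is then corrected with the global position fix inside an extended Kalman filter.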


New Medical Image Fusion Approach with Coding Based on SCD in Wireless Sensor Network

  • Zhang, De-gan;Wang, Xiang;Song, Xiao-dong
    • Journal of Electrical Engineering and Technology, v.10 no.6, pp.2384-2392, 2015
  • The technical development and practical application of big data for health is a hot topic under the banner of big data, and big-data medical image fusion is one of its key problems. A new fusion approach with coding based on the Spherical Coordinate Domain (SCD) in a Wireless Sensor Network (WSN) for big-data medical images is proposed in this paper. In this approach, the three high-frequency coefficients in the wavelet domain of a medical image are pre-processed; this strategy reduces the redundancy of the big-data medical image. First, the high-frequency coefficients are transformed to the spherical coordinate domain to reduce the correlation within the same scale. Then, a multi-scale model product (MSMP) is used to control the shrinkage function so that small wavelet coefficients and some noise are removed. The high-frequency parts in the spherical coordinate domain are coded by an improved SPIHT algorithm. Finally, the medical image is fused and reconstructed based on its multi-scale edges. Experimental results indicate the novel approach is effective and very useful for the transmission of big-data medical images, especially in wireless environments.
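
The coordinate change at the heart of the SCD step can be illustrated as follows. This is only a sketch of mapping the three high-frequency wavelet coefficients (HL, LH, HH) at one position to spherical coordinates; the paper's actual pre-processing, MSMP shrinkage, and SPIHT coding are not reproduced.

```python
import math

def to_spherical(hl, lh, hh):
    # Treat the (HL, LH, HH) coefficient triple as a 3-D vector and
    # convert it to spherical coordinates (r, theta, phi). After the
    # change of coordinates, most of the energy concentrates in r,
    # which reduces correlation across the three sub-bands.
    r = math.sqrt(hl * hl + lh * lh + hh * hh)
    theta = math.acos(hh / r) if r > 0 else 0.0
    phi = math.atan2(lh, hl)
    return r, theta, phi
```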

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing, v.36 no.5_1, pp.807-821, 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, developed for fusing mid- and low-resolution satellite images, to constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated, considering the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images of agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of the model type, a strong correlation between the coarse-resolution images acquired on the pair dates and on the prediction date contributed more to prediction performance than a small time difference between the pair dates and the prediction date. In addition, using the vegetation index as the input for spatio-temporal fusion alleviated error propagation and showed better prediction performance than calculating the vegetation index from fused reflectance values. These experimental results can serve as basic information both for selecting optimal image pairs and input types, and for developing advanced spatio-temporal fusion models for crop monitoring.
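
The core idea shared by the models compared above can be reduced to a one-line baseline: predict the fine-resolution image on the target date by adding the coarse-resolution temporal change to the fine-resolution base image. This is a deliberately minimal sketch; STARFM, SPSTFM, and FSDAF all refine it with neighborhood and spectral-similarity weighting, and the coarse images are assumed here to be already resampled to the fine grid.

```python
import numpy as np

def temporal_fusion_baseline(fine_t1, coarse_t1, coarse_t2):
    # Fine image at t2 ≈ fine image at t1 plus the change observed
    # at coarse resolution between t1 and t2 (all arrays co-registered
    # and on the same grid).
    return fine_t1 + (coarse_t2 - coarse_t1)
```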

Multi-sensor Fusion Based Guidance and Navigation System Design of Autonomous Mine Disposal System Using Finite State Machine

  • Kim, Ki-Hun;Choi, Hyun-Taek;Lee, Chong-Moo
    • Journal of the Institute of Electronics Engineers of Korea SC, v.47 no.6, pp.33-42, 2010
  • This research proposes a practical guidance system that considers ocean currents in real-sea operation. Optimality of the generated path is not an issue in this paper: way-points from the start point to possible goal positions are selected by experienced human supervisors considering the major ocean-current axis. This paper also describes in detail the implementation of a precise underwater navigation solution using a multi-sensor fusion technique based on USBL, GPS, DVL, and AHRS measurements. To implement a precise, accurate, and frequent underwater navigation solution, three strategies are chosen. The first is identification of the heading alignment angle, to enhance the performance of the standalone dead-reckoning algorithm. The second is timely fusion of the absolute position to prevent accumulation of integration error, where the absolute position can be selected between USBL and GPS according to sensor status. The third is the introduction of an effective outlier rejection algorithm. The performance of the developed algorithm is verified with experimental data from a mine disposal vehicle and a deep-sea ROV.
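
The third strategy, outlier rejection, is commonly implemented as innovation gating. The sketch below illustrates that idea for a scalar position component; the 3-sigma gate and the scalar form are assumptions for illustration, not details taken from the paper.

```python
def accept_position_fix(predicted, measured, sigma, gate=3.0):
    # Accept an absolute position fix (e.g. from USBL or GPS) only if
    # its innovation — the gap between the fix and the dead-reckoned
    # prediction — lies within `gate` standard deviations. Fixes
    # outside the gate are treated as outliers and discarded.
    return abs(measured - predicted) <= gate * sigma
```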

The Performance Analysis of IMM-MPDA Filter in Multi-lag Out of Sequence Measurement Environment

  • Seo, Il-Hwan;Song, Taek-Lyul
    • The Transactions of The Korean Institute of Electrical Engineers, v.56 no.8, pp.1476-1483, 2007
  • In multi-sensor target tracking systems, local sensors track the target and transfer their measurements to the fusion center. Measurements from the same target can arrive out of sequence; these are called out-of-sequence measurements (OOSMs). An OOSM can arrive at the fusion center with a single lag or multiple lags. A recursive retrodiction step to update the current state estimates with multi-lag OOSMs has been proposed in several previous papers. In practice, maneuvering-target information can arrive at the fusion center as possible OOSMs mixed with random clutter. In this paper, we incorporate IMM-MPDA (Interacting Multiple Model - Most Probable Data Association) into the multi-lag OOSM update. The performance of the IMM-MPDA filter with multi-lag OOSM update is analyzed for various clutter densities, OOSM lag numbers, and target maneuvering indexes. Simulation results show that IMM-MPDA is sufficient for use in an out-of-sequence environment and that it is necessary to correct the current state estimates with OOSMs, except for very old ones.

Image Fusion of High Resolution SAR and Optical Image Using High Frequency Information

  • Byun, Young-Gi;Chae, Tae-Byeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.30 no.1, pp.75-86, 2012
  • A Synthetic Aperture Radar (SAR) imaging system is independent of solar illumination and weather conditions; however, SAR images are difficult to interpret compared with optical images. There has been increased interest in multi-sensor fusion techniques that can improve the interpretability of SAR images by fusing in the spectral information of a multispectral (MS) image. In this paper, a multi-sensor fusion method is proposed based on a high-frequency extraction process using the Fast Fourier Transform (FFT) and an outlier elimination process; it maintains the spectral content of the original MS image while retaining the spatial detail of the high-resolution SAR image. We used a TerraSAR-X image, built on the same X-band SAR system as KOMPSAT-5, and a KOMPSAT-2 MS image as the test data set to evaluate the proposed method. To evaluate the efficiency of the proposed method, the fusion result was compared visually and quantitatively with results obtained using existing fusion algorithms. The evaluation showed that the proposed method achieved successful fusion of SAR and MS images compared with the existing algorithms.
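
The high-frequency extraction and injection described above can be sketched with an FFT high-pass filter. The circular cutoff at `cutoff * min(h, w)` and the additive injection are assumptions for illustration; the paper additionally applies outlier elimination before injecting the SAR detail.

```python
import numpy as np

def inject_sar_detail(ms_band, sar, cutoff=0.2):
    # Take the 2-D FFT of the SAR image, zero out the low-frequency
    # region around the (shifted) DC component, invert the transform,
    # and add the remaining high-frequency detail to one MS band.
    h, w = sar.shape
    spectrum = np.fft.fftshift(np.fft.fft2(sar))
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    highpass = radius > cutoff * min(h, w)   # keep high frequencies only
    detail = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * highpass)))
    return ms_band + detail
```

A flat SAR patch contributes no detail, so the MS band passes through unchanged, which is the behavior that preserves the MS spectral content in smooth regions.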

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing, v.6 no.3, pp.175-182, 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth of the scene in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
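
The conversion from a warped ToF depth value to an initial disparity follows the standard rectified-stereo relation d = f·B/Z. The function and parameter names below are illustrative; the pinhole-camera and rectified-pair assumptions are mine, not details from the paper.

```python
def depth_to_disparity(depth_m, focal_px, baseline_m):
    # Rectified stereo: disparity (pixels) = focal length (pixels)
    # times baseline (meters) divided by depth (meters). Used here
    # to seed stereo matching with the warped ToF depth.
    return focal_px * baseline_m / depth_m
```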

Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems, v.15 no.4, pp.365-371, 2009
  • An intelligent hexapod mobile robot with various types of sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors and an image-processing algorithm. First, to detect objects, active sensors such as infrared sensors and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensors' output; the difference between the measured and calculated values is less than 5%. This paper suggests an effective visual detection system for moving objects based on specified color and motion information. The proposed method includes an object extraction and definition process which uses color transformation and AWUPC computation to decide the existence of a moving object. We assign weighting values to the results from the sensors and the camera, and combine them into a single value that represents the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over using an individual sensor.
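
The weighted combination of sensor and camera results described above can be sketched as a weighted average of per-sensor detection scores. The function name and the choice of weights are illustrative assumptions; the abstract does not give the actual weighting values.

```python
def fuse_detection_scores(scores, weights):
    # Combine per-sensor object-presence scores in [0, 1] — e.g. from
    # infrared, ultrasonic, and camera-based detection — into a single
    # value representing the probability of an object within range.
    if len(scores) != len(weights):
        raise ValueError("one weight per sensor score required")
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```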