• Title/Summary/Keyword: fusion of sensor information

Search Results: 410

Development of a Vehicle Positioning Algorithm Using Reference Images (기준영상을 이용한 차량 측위 알고리즘 개발)

  • Kim, Hojun;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.1131-1142
    • /
    • 2018
  • Autonomous vehicles are being widely developed and operated because of their advantages in reducing traffic accidents and saving driving time and cost. Vehicle localization is an essential component of autonomous vehicle operation. In this paper, a sensor-fusion-based localization algorithm is developed for cost-effective localization using in-vehicle sensors, GNSS, an image sensor, and reference images prepared in advance. The information in the reference images can overcome the low positioning accuracy that occurs when only the sensor information is used, and it also yields stable position estimates even when the vehicle is in a satellite-signal blockage area. A particle filter is used for sensor fusion because it can reflect the differing probability density distributions of the individual sensors. To evaluate the performance of the algorithm, a data acquisition system was built, and driving data and reference image data were acquired. Finally, we verify that vehicle positioning can be performed with an accuracy of about 0.7 m when the acquired images and the reference image information are integrated along a route where the satellite sensor alone yields a relatively large error.
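
The particle-filter fusion described in this abstract can be sketched roughly as follows. The 1-D motion model, the noise levels, and the two Gaussian sensor likelihoods (a coarse GNSS fix and a more precise reference-image match) are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_likelihood(z, particles, sigma):
    """Likelihood of measurement z at each particle's position."""
    return np.exp(-0.5 * ((z - particles) / sigma) ** 2)

def particle_filter_step(particles, weights, control, measurements):
    """One predict-update-resample cycle fusing several sensors.

    measurements: list of (z, sigma) pairs, one pair per sensor.
    """
    # Predict: propagate every particle through the motion model with noise.
    particles = particles + control + rng.normal(0.0, 0.5, particles.size)
    # Update: multiply in each sensor's likelihood (this is the fusion step).
    for z, sigma in measurements:
        weights = weights * gaussian_likelihood(z, particles, sigma)
    weights = weights / weights.sum()
    # Resample particles in proportion to their weights.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Usage: the vehicle moves 1 m per step along a 1-D road; a low-accuracy
# satellite fix and a reference-image position are fused at every step.
particles = rng.normal(0.0, 5.0, 1000)
weights = np.full(1000, 1.0 / 1000)
for t in range(1, 11):
    true_pos = float(t)
    gnss = (true_pos + rng.normal(0.0, 3.0), 3.0)    # coarse satellite fix
    image = (true_pos + rng.normal(0.0, 0.5), 0.5)   # reference-image match
    particles, weights = particle_filter_step(particles, weights, 1.0, [gnss, image])
estimate = particles.mean()
```

Because each sensor enters only through its own likelihood, sensors with very different error distributions can be combined in one update, which is the property the abstract highlights.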

A Data Fusion Algorithm of the Nonlinear System Based on Filtering Step By Step

  • Wen Cheng-Lin;Ge Quan-Bo
    • International Journal of Control, Automation, and Systems
    • /
    • v.4 no.2
    • /
    • pp.165-171
    • /
    • 2006
  • This paper proposes a data fusion algorithm for synchronously sampled nonlinear multi-sensor dynamic systems based on step-by-step filtering. First, the object state variable at the next time index is predicted from the previous global information of the system; the predicted estimate is then updated in turn by the extended Kalman filter as each observation of the target state variable arrives. Finally, a fused estimate of the object state variable is obtained from the global system information. We formulate the new algorithm and compare its performance with that of the traditional nonlinear centralized and distributed data fusion algorithms using indexes that include computational complexity, data communication burden, time delay, and estimation accuracy. The comparison indicates that the new algorithm outperforms both traditional nonlinear data fusion algorithms.
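
The predict-once, update-in-turn structure described above can be sketched for a scalar system. The transition coefficient, process noise, and sensor variances below are illustrative assumptions, and the linearization step of a full extended Kalman filter is reduced to a fixed coefficient for brevity.

```python
def sequential_fusion_step(x, P, observations, f=0.9, q=0.01):
    """One cycle of step-by-step fusion for a scalar system.

    Predict once from the previous global (fused) estimate, then update
    the prediction in turn as each sensor's observation arrives.
    observations: list of (z, r) pairs, r = measurement-noise variance.
    """
    # Prediction from the previous global estimate (f plays the role of
    # the linearized state transition in the extended Kalman filter).
    x_pred = f * x
    P_pred = f * P * f + q
    # Sequential updates: each sensor refines the running estimate.
    for z, r in observations:
        k = P_pred / (P_pred + r)          # Kalman gain (H = 1)
        x_pred = x_pred + k * (z - x_pred)
        P_pred = (1.0 - k) * P_pred
    return x_pred, P_pred

# Usage: three sensors of decreasing noise observe the same state.
x, P = sequential_fusion_step(0.0, 1.0, [(0.2, 0.5), (0.15, 0.2), (0.18, 0.1)])
```

Each sensor's update reuses the estimate already refined by the previous sensors, which is what distinguishes step-by-step fusion from a single batched centralized update.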

3D motion estimation using multisensor data fusion (센서융합을 이용한 3차원 물체의 동작 예측)

  • 양우석;장종환
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10a
    • /
    • pp.679-684
    • /
    • 1993
  • This article presents an approach to estimating the general 3D motion of a polyhedral object using multiple sensory data, some of which may not provide sufficient information for the estimation of object motion. Motion can be estimated continuously from each sensor through analysis of the instantaneous state of the object. We introduce a method based on Moore-Penrose pseudo-inverse theory to estimate the instantaneous state of an object, and discuss a linear feedback estimation algorithm to estimate the object's 3D motion. The motion estimated from each sensor is then fused to provide more accurate and reliable information about the motion of the unknown object. Techniques for multisensor data fusion can be categorized into three methods: averaging, decision, and guiding. We present a fusion algorithm that combines averaging and decision.

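
The role the abstract gives the Moore-Penrose pseudo-inverse (handling sensors that individually underconstrain the motion) can be sketched as follows. The 5-equation, 3-unknown system and its noise level are illustrative assumptions, not the paper's actual sensor model.

```python
import numpy as np

def estimate_state(A, b):
    """Estimate a motion-parameter vector x from sensor equations A x ≈ b.

    The Moore-Penrose pseudo-inverse returns the least-squares solution
    when the system is overdetermined and the minimum-norm solution when
    some sensors provide insufficient information (rank-deficient A).
    """
    return np.linalg.pinv(A) @ b

# Usage: 3 unknown velocity components observed through 5 noisy
# sensor equations (an overdetermined system).
rng = np.random.default_rng(1)
x_true = np.array([0.5, -0.2, 1.0])
A = rng.normal(size=(5, 3))
b = A @ x_true + rng.normal(0.0, 0.01, 5)
x_est = estimate_state(A, b)
```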

Uncertainty Fusion of Sensory Information Using Fuzzy Numbers

  • Park, Sangwook;Lee, C. S. George
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1001-1004
    • /
    • 1993
  • The Multisensor Fusion Problem (MFP) deals with methodologies for effectively combining homogeneous or non-homogeneous information obtained from multiple redundant or disparate sensors in order to perform a task more accurately, efficiently, and reliably. The inherent uncertainties in the sensory information are represented using Fuzzy Numbers, -numbers, and the Uncertainty-Reductive Fusion Technique (URFT) is introduced to combine the multiple sensory information into one consensus -number. The MFP is formulated from an Information Theory perspective, where sensors are viewed as information sources with a fixed output alphabet and systems are modeled as a network of information-processing and propagating channels. The performance of the URFT is compared with other fusion techniques in solving the 3-Sensor Problem.

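
The URFT itself is not specified in the abstract; the following is only a generic sketch of forming a consensus from triangular fuzzy numbers, where the inverse-spread weighting and the combined-spread formula are illustrative assumptions.

```python
def fuse_triangular(numbers):
    """Fuse triangular fuzzy numbers (left, peak, right) into one consensus.

    Each sensor's reading is weighted by the inverse of its spread, so
    less uncertain sensors dominate, and the fused spread narrows as
    more sensors are combined (uncertainty reduction).
    """
    weights = [1.0 / (right - left) for left, peak, right in numbers]
    total = sum(weights)
    peak = sum(w * p for w, (_, p, _) in zip(weights, numbers)) / total
    half = 1.0 / total  # combined spread, analogous to variances in parallel
    return (peak - half, peak, peak + half)

# Usage: three sensors report the same quantity with different spreads.
fused = fuse_triangular([(1.0, 2.0, 3.0), (2.15, 2.4, 2.65), (1.95, 2.2, 2.45)])
```

The fused number is narrower than any individual reading, which mirrors the "uncertainty-reductive" goal named in the abstract.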

GPS/INS Fusion Using Multiple Compensation Method Based on Kalman Filter (칼만 필터를 이용한 GPS/INS융합의 다중 보정 방법)

  • Kwon, Youngmin
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.190-196
    • /
    • 2015
  • In this paper, we propose a multiple location-error compensation algorithm for GPS/INS fusion using a Kalman filter, and introduce a way to reduce location error in 9-axis navigation devices implementing inertial navigation. Because location error grows as the position is estimated, navigation systems need robust algorithms to compensate for it in GPS/INS fusion. To improve the robustness of a 9-axis inertial sensor (MPU-9150) against disturbance, we used a tilt compensation method based on a compensation algorithm for the acceleration sensor, together with yaw-angle compensation to obtain exact azimuth information for the object. Combined with the Kalman filter, these methods show improved location results.
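
The two ingredients named above (accelerometer-based tilt compensation and a Kalman correction of the INS position by GPS) can be sketched in scalar form. The variances and measurements below are illustrative assumptions, not the paper's tuning.

```python
import math

def tilt_angles(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading,
    used to compensate sensor tilt before heading/position fusion."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def gps_ins_update(x, P, z, r):
    """Correct an INS dead-reckoned position x (variance P) with a
    GPS fix z (measurement-noise variance r) via a scalar Kalman gain."""
    k = P / (P + r)
    return x + k * (z - x), (1.0 - k) * P

# Usage: a level sensor (gravity only on the z axis) has zero tilt;
# the drifting INS position estimate is then pulled toward the GPS fix.
roll, pitch = tilt_angles(0.0, 0.0, 9.81)
x, P = gps_ins_update(10.0, 4.0, 12.0, 1.0)
```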

Gait Data Gathering in WiFi-embedded Smart Shoes with Gyro and Acceleration Sensor

  • Jeong, KiMin;Lee, Kyung-chang
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.4
    • /
    • pp.459-465
    • /
    • 2019
  • Interest in health is increasing, as is research on methods for measuring human body information. Continuously observing information such as step changes and walking speed is becoming more important, since a person's gait reveals information about disease and currently weakened areas. In this paper, gait is measured using a wearable walking module built into shoes, with the aim of making continuous measurement possible by simplifying the gait measurement method. The module is designed to receive information from a gyro sensor and an acceleration sensor; it is capable of WiFi communication, and the collected walking information is stored on a server. The stored information is corrected by integrating the acceleration sensor and gyro sensor values, and a band-pass filter is used to reduce the error. The data are then categorized by the Gait Finder into walking and waiting states; when walking, each step is segmented and stored separately for analysis.
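
The paper's filter design is not given in the abstract; a minimal band-pass sketch, assuming a simple difference of two exponential moving averages with illustrative coefficients, looks like this:

```python
def ema(data, alpha):
    """Exponential moving average: a simple first-order low-pass filter."""
    out, y = [], data[0]
    for x in data:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def band_pass(data, alpha_fast=0.3, alpha_slow=0.02):
    """Band-pass as the difference of a fast and a slow low-pass:
    the fast EMA removes high-frequency jitter, and subtracting the
    slow EMA removes constant offsets and slow sensor drift."""
    fast = ema(data, alpha_fast)
    slow = ema(data, alpha_slow)
    return [f - s for f, s in zip(fast, slow)]

# Usage: a constant accelerometer offset is rejected entirely,
# leaving only mid-band motion such as individual steps.
flat = band_pass([1.0] * 100)
```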

Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion (영상처리와 센서융합을 활용한 지능형 6족 이동 로봇)

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.4
    • /
    • pp.365-371
    • /
    • 2009
  • An intelligent mobile hexapod robot with various types of sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors with an image processing algorithm. First, to detect objects, active sensors such as infrared sensors and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensors' output; the difference between the measured and calculated values is less than 5%. This paper also suggests an effective visual detection system for moving objects using specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide on the existence of a moving object. We assign weighting values to the results from the sensors and the camera, and combine them into a single value that represents the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over techniques using an individual sensor.
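
The weighted combination of per-sensor results into one detection probability can be sketched as below; the confidences and the weights given to the IR, ultrasonic, and camera channels are illustrative assumptions.

```python
def fuse_detections(readings):
    """Combine per-sensor detection confidences into one probability.

    readings: list of (confidence, weight) pairs; the weights express
    how much each sensor (IR, ultrasonic, camera) is trusted.
    """
    total = sum(w for _, w in readings)
    return sum(c * w for c, w in readings) / total

# Usage: IR (0.9), ultrasonic (0.7), and camera (0.95) results fused,
# with the camera given the largest weight.
p = fuse_detections([(0.9, 0.3), (0.7, 0.3), (0.95, 0.4)])
```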

A Fusion Algorithm considering Error Characteristics of the Multi-Sensor (다중센서 오차특성을 고려한 융합 알고리즘)

  • Hyun, Dae-Hwan;Yoon, Hee-Byung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.4
    • /
    • pp.274-282
    • /
    • 2009
  • Various location-tracking sensors, such as GPS, INS, radar, and optical equipment, are used for tracking moving targets. To track moving targets effectively, an effective fusion method for these heterogeneous devices is needed. Previous studies improved tracking performance with heterogeneous multi-sensors by treating the estimated values of each sensor as different models and fusing them while considering the sensors' different error characteristics. However, when a sensor's error increases sharply, the error of the fused estimate also increases, and approaches that substitute changed sensor estimates into the Sensor Probability could not be applied in real time. In this study, the Sensor Probability is obtained by comparing the RMSE (Root Mean Square Error) of the difference between the updated and measured values of the Kalman filter for each sensor, and the process of substituting new combined values for each sensor's Kalman filter inputs is excluded. This improves both the real-time applicability of the estimated sensor values and the tracking performance in regions where a sensor's performance drops rapidly. The proposed algorithm incorporates the error characteristic of each sensor as a conditional probability value and achieves greater accuracy by performing track fusion with the most reliable sensors. In the experiment, the trajectory of a UAV is generated and a performance analysis is conducted against other fusion algorithms.
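
An RMSE-derived Sensor Probability can be sketched as inverse-error weighting of each sensor's Kalman innovations. The innovation values and the scalar track states below are illustrative, not the paper's data.

```python
def sensor_probability(innovations):
    """Weight each sensor by the inverse mean-squared Kalman innovation
    (measurement minus updated estimate): a sensor whose errors grow
    sharply automatically receives a low probability."""
    mse = [sum(e * e for e in errs) / len(errs) for errs in innovations]
    inv = [1.0 / m for m in mse]
    total = sum(inv)
    return [w / total for w in inv]

def fuse_tracks(estimates, probs):
    """Probability-weighted fusion of the per-sensor track estimates."""
    return sum(p * x for p, x in zip(probs, estimates))

# Usage: a GPS track with large innovations vs. a radar track with
# small ones; the radar dominates the fused position.
probs = sensor_probability([[0.5, -0.4, 0.6], [0.1, -0.1, 0.05]])
fused = fuse_tracks([10.2, 10.0], probs)
```

Because the weights come only from each sensor's own innovation history, no estimated values need to be substituted back into the filters, matching the real-time property claimed in the abstract.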

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon;Kim, Se-jin;Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.2
    • /
    • pp.36-54
    • /
    • 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. By combining the characteristics of each sensor, the system is more robust to environmental changes than a video detector alone; it is unaffected by day and night conditions and has lower maintenance costs than an inductive-loop traffic detector. This is accomplished by synthesizing object-tracking information from a radar, vehicle-classification information from a video detector, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, experiments were conducted for 6 hours over 5 days, during daytime and early evening, on a pedestrian-accessible road. According to the experimental results, the system achieves 88.7% classification accuracy and a 95.5% vehicle detection rate. If the parameters of the system are optimized to adapt to environmental changes, it is expected to contribute to the advancement of ITS.

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.87-93
    • /
    • 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information; we generate a new input that consists of RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements, so to improve localization performance the output of the CNN is used as measurements for a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.