• Title/Summary/Keyword: vision/inertial sensor fusion

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences, Vol. 11, No. 1, pp. 31-40, 2010
  • For weapon cueing and Head-Mounted Displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve vision-processing performance, structure estimation and motion estimation are separated: the structure estimation tracks features that are part of the helmet model structure in the scene, while the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested on both synthetic and real data, and the results show that the sensor fusion is successful. (A minimal EKF fusion sketch follows below.)
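The paper's filter itself is not reproduced here; as a rough illustration of inertial prediction corrected by stereo-vision position fixes, the following minimal EKF sketch uses a constant-velocity model, and every symbol, rate, and noise value is a hypothetical assumption rather than the authors' design:

```python
import numpy as np

# Minimal EKF sketch: the IMU propagates a position/velocity state at high
# rate, and stereo-vision position fixes correct it. All models, rates, and
# noise values below are illustrative assumptions.
dt = 0.01                                     # IMU period (hypothetical)
x = np.zeros(6)                               # state: position (3), velocity (3)
P = np.eye(6) * 0.1                           # state covariance
Q = np.eye(6) * 1e-4                          # process noise (hypothetical)
R = np.eye(3) * 1e-2                          # vision noise (hypothetical)
F = np.eye(6)
F[:3, 3:] = np.eye(3) * dt                    # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # vision observes position only

def predict(x, P, accel):
    """Propagate the state with one IMU acceleration sample."""
    x = F @ x
    x[3:] += accel * dt
    return x, F @ P @ F.T + Q

def update(x, P, z_vision):
    """Correct the state with a stereo-vision position fix."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z_vision - H @ x)
    return x, (np.eye(6) - K @ H) @ P
```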

Hybrid Inertial and Vision-Based Tracking for VR Applications

  • 구재필;안상철;김형곤;김익재;구열회
    • Proceedings of the 2003 KIEE (Korean Institute of Electrical Engineers) Conference, Information and Control Section A, pp. 103-106, 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds; as a result, accurate, real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6-DOF recovery but lacks long-term stability due to sensor noise. To overcome these individual drawbacks and build a better tracking system, we fuse vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. The hybrid tracker is implemented with a Kalman filter that operates in a predictor-corrector manner (a minimal sketch of this structure follows below). A Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6-DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs through a natural interface.
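The paper's Kalman formulation is not reproduced here; the simpler complementary-style sketch below illustrates the same high-rate-predict / low-rate-correct structure on a single yaw axis (the gain, rates, and simulated signals are hypothetical stand-ins, not the paper's implementation):

```python
import numpy as np

# Complementary-style fusion on one yaw axis: the gyro integrates at high
# rate (predictor), and each slower vision fix pulls the estimate back toward
# a drift-free reference (corrector). Gain, rates, signals are hypothetical.
dt, alpha = 0.01, 0.05          # IMU period (s), correction gain
yaw = 0.0                       # fused yaw estimate (rad)
rng = np.random.default_rng(0)

def imu_step(yaw, gyro_z):
    """High-rate predictor: integrate the measured yaw rate (rad/s)."""
    return yaw + gyro_z * dt

def vision_step(yaw, yaw_vision):
    """Low-rate corrector: blend toward the vision-derived yaw."""
    return (1 - alpha) * yaw + alpha * yaw_vision

for k in range(1000):           # 10 s of simulated motion at 0.1 rad/s
    true_yaw = 0.1 * (k + 1) * dt
    yaw = imu_step(yaw, gyro_z=0.1 + 0.02 + rng.normal(0, 0.01))  # biased gyro
    if k % 10 == 0:             # vision runs at 1/10 the IMU rate
        yaw = vision_step(yaw, yaw_vision=true_yaw)
```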


Pose Estimation of Ground Test Bed Using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems, Vol. 18, No. 1, pp. 54-61, 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot: it has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs that serve as ceiling landmarks. Such a fusion filter typically uses the positions of feature points in the image as measurements. However, if the MEMS IMU bias is not properly estimated by the filter, this approach can produce position errors whenever the camera image is unavailable. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from the optical flow of those feature points (a sketch of the velocity-from-flow step appears below). Experiments verify that the proposed method is more robust to IMU bias than the method that uses feature-point positions alone.
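As a sketch of the kind of velocity measurement the proposed filter adds, the snippet below tracks ceiling-landmark features with OpenCV's pyramidal Lucas-Kanade optical flow and converts the mean pixel flow into a planar camera velocity under a simplified pinhole model (pure translation, known camera-to-ceiling distance); the focal length, distance, and frame rate are assumed values, not the paper's:

```python
import cv2
import numpy as np

# Velocity-aiding sketch: track ceiling-landmark features with LK optical
# flow and convert mean pixel flow into planar camera velocity via a
# simplified pinhole model. All parameters below are hypothetical.
F_PX = 600.0                    # focal length in pixels (assumed)
HEIGHT_M = 2.5                  # camera-to-ceiling distance in m (assumed)
DT = 1.0 / 30.0                 # frame period in s (assumed)

def camera_velocity(prev_gray, curr_gray, prev_pts):
    """prev_pts: float32 array of shape (N, 1, 2); returns velocity (m/s)."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None)
    ok = status.ravel() == 1                      # keep well-tracked features
    flow_px = (curr_pts[ok] - prev_pts[ok]).reshape(-1, 2).mean(axis=0)
    # Pinhole model: pixel flow * depth / focal length = scene displacement.
    return -(flow_px * HEIGHT_M / F_PX) / DT      # camera moves against flow
```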

Vision Aided Inertial Sensor Bias Compensation for Firing Lane Alignment

  • 아샤드 어웨이스;박준우;방효충;김윤영;김희수;이용선;최성호
    • Journal of the Korean Society for Aeronautical & Space Sciences, Vol. 50, No. 9, pp. 617-625, 2022
  • This paper presents a method for compensating gyroscope and accelerometer biases using a movable calibration target for firing lane alignment. Information about the calibration target is acquired through a vision sensor and used to correct the errors of the inertial measurement unit attached to the launcher. The performance of the proposed algorithm is verified through simulation; in particular, it is shown that the inertial sensor biases of the launcher can be compensated effectively by accurately acquiring the position of the calibration target in the inertial frame. (A minimal illustration of vision-aided bias estimation follows below.)
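The paper's formulation is not given here; as a minimal single-axis illustration of vision-aided bias estimation, the sketch below augments the filter state with a gyro bias that vision-derived heading fixes to the calibration target make observable (the models and noise values are assumptions, not the authors'):

```python
import numpy as np

# Single-axis bias-estimation sketch: the state holds [heading, gyro bias],
# the gyro drives the prediction, and vision heading fixes observe the
# heading, letting the filter separate out the bias. Values are hypothetical.
dt = 0.01
x = np.zeros(2)                 # state: [heading (rad), gyro bias (rad/s)]
P = np.diag([1e-2, 1e-4])
Q = np.diag([1e-6, 1e-8])       # process noise (hypothetical)
R = np.array([[1e-4]])          # vision heading noise (hypothetical)
F = np.array([[1.0, -dt],       # heading integrates (rate - bias) * dt
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # vision observes heading only

def step(x, P, gyro_rate, z_heading):
    """One predict-update cycle; x[1] converges toward the gyro bias."""
    x = F @ x + np.array([gyro_rate * dt, 0.0])   # predict with gyro
    P = F @ P @ F.T + Q
    y = z_heading - H @ x                         # vision innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P
```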

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor

  • 문종식;이병윤
    • IEMEK Journal of Embedded Systems and Applications, Vol. 16, No. 3, pp. 107-111, 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation. However, generating a 3D point cloud map with a single sensor is limited by the price of expensive sensors. To solve this problem, we propose a precise 3D mapping system based on low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on these state estimates, the 2D LiDAR measurements describe the surrounding environment to create a point cloud map (the projection step is sketched below). To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. The results confirm that a precise 3D point cloud map can be generated with the proposed low-cost sensor fusion system.
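A minimal sketch of the mapping step, assuming the VIO sensor reports a world-frame rotation R_wb and translation t_wb for each synchronized 2D scan; the frame conventions and variable names are hypothetical, not the paper's:

```python
import numpy as np

# Mapping-step sketch: project each 2D LiDAR scan into the world frame using
# the pose reported by the visual-inertial odometry sensor, then append the
# points to the accumulated cloud. Frames and names are illustrative.
def scan_to_world(ranges, angles, R_wb, t_wb):
    """ranges/angles: 2D scan in the sensor plane; R_wb (3x3), t_wb (3,)."""
    pts_body = np.stack([ranges * np.cos(angles),
                         ranges * np.sin(angles),
                         np.zeros_like(ranges)], axis=1)   # z = 0 scan plane
    return pts_body @ R_wb.T + t_wb                        # rotate + translate

cloud = []
# For each synchronized (scan, pose) pair from the LiDAR and VIO sensor:
#     cloud.append(scan_to_world(ranges, angles, R_wb, t_wb))
# map_points = np.vstack(cloud)
```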

Overview of Sensor Fusion Techniques for Vehicle Positioning

  • 박진원;최계원
    • The Journal of the Korea Institute of Electronic Communication Sciences, Vol. 11, No. 2, pp. 139-144, 2016
  • This paper reviews recent trends in sensor fusion techniques for precise vehicle positioning. GNSS alone cannot satisfy the accuracy and reliability that precise positioning for autonomous driving requires. The paper introduces integrated positioning techniques that combine GNSS with dead-reckoning sensors such as odometers and gyroscopes (a dead-reckoning step is sketched below), and surveys recent positioning methods that match landmarks detected by LiDAR or stereo vision against the information stored in a high-precision map.
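As a generic illustration of the GNSS plus dead-reckoning integration such surveys cover, the sketch below propagates a 2D pose with odometer and gyroscope data between GNSS fixes and blends each fix in with a simple gain; this is not any specific system from the paper, and all names and the gain are assumptions:

```python
import numpy as np

# Dead-reckoning sketch: odometer distance and gyro heading rate propagate
# the 2D pose between GNSS fixes, and each fix bounds the accumulated drift.
def dr_step(pose, odo_dist, gyro_rate, dt):
    """pose = [x, y, heading]; propagate with odometer + gyroscope."""
    x, y, th = pose
    th += gyro_rate * dt
    return np.array([x + odo_dist * np.cos(th),
                     y + odo_dist * np.sin(th),
                     th])

def gnss_fix(pose, z_xy, gain=0.5):
    """Blend a GNSS position fix into the dead-reckoned pose."""
    pose[:2] = (1 - gain) * pose[:2] + gain * np.asarray(z_xy)
    return pose
```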

Road Surface Marking Detection for Sensor Fusion-based Positioning System

  • 김동석;정호기
    • Transactions of the Korean Society of Automotive Engineers, Vol. 22, No. 7, pp. 107-116, 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system has two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM; the RSM detection generates candidates in those regions and classifies their types. The system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection varies, and the detection method changes according to an FSM (Finite State Machine) that tracks the driving situation. A single template matching scheme extracts features for both lane marking and RSM detection, implemented efficiently with a horizontal integral image (sketched below). Finally, multiple verification steps minimize false detections.
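A minimal sketch of the horizontal integral image idea: exclusive prefix sums along each row let any horizontal segment sum, and hence simple template features, be evaluated in constant time. The implementation details here are illustrative, not the paper's:

```python
import numpy as np

# Horizontal integral image: ii[r, c] holds the sum of img[r, :c], so the
# sum of any row segment img[r, c0:c1] is ii[r, c1] - ii[r, c0] in O(1).
def horizontal_integral(img):
    """Exclusive prefix sum along each row of a 2D image."""
    ii = np.zeros((img.shape[0], img.shape[1] + 1), dtype=np.int64)
    ii[:, 1:] = np.cumsum(img, axis=1)
    return ii

def row_segment_sum(ii, r, c0, c1):
    """Sum of img[r, c0:c1] from the integral image in constant time."""
    return ii[r, c1] - ii[r, c0]

img = (np.arange(20).reshape(4, 5) % 7).astype(np.int64)  # toy image
ii = horizontal_integral(img)
assert row_segment_sum(ii, 2, 1, 4) == img[2, 1:4].sum()
```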

A Time Synchronization Scheme for Vision/IMU/OBD by GPS

  • 임준후;최광호;유원재;김라우;이유담;이형근
    • Journal of Advanced Navigation Technology, Vol. 21, No. 3, pp. 251-257, 2017
  • To estimate a vehicle's position accurately, integrated positioning that combines GPS (global positioning system) with vision and inertial sensors has been actively studied. This paper proposes a time synchronization scheme among the sensors, one of the key elements of integrated positioning. The proposed scheme acquires vision, inertial, and OBD (on-board diagnostics) measurements that are time-synchronized to GPS time. Time and position information are obtained from GPS, attitude-related measurements from the inertial sensor, and vehicle speed via the OBD. The GPS time and position, together with the inertial and OBD measurements, are converted to color values and embedded into the pixels of the image acquired from the vision sensor (a sketch of this embedding appears below); the time-synchronized measurements embedded in the image can later be extracted by the inverse conversion. An embedded Linux board was used to integrate the sensors, and the proposed scheme was evaluated through experiments with an actual driving vehicle.
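A minimal sketch of the embedding idea, assuming a hypothetical layout in which time-tagged measurements are packed as float64 bytes into the first row of RGB pixels (the paper's actual encoding is not specified here):

```python
import numpy as np

# Embedding sketch: serialize measurements to bytes and write them into a
# reserved block of pixels (3 bytes per RGB pixel), so each saved frame
# carries its time-synchronized sensor data. Layout is hypothetical.
def embed(img, values):
    """Write float64 measurements into the first-row pixels of img."""
    data = np.frombuffer(np.asarray(values, np.float64).tobytes(), np.uint8)
    n = int(np.ceil(len(data) / 3))           # pixels needed
    block = np.zeros(n * 3, dtype=np.uint8)
    block[:len(data)] = data
    img = img.copy()
    img[0, :n] = block.reshape(n, 3)          # overwrite first-row pixels
    return img

def extract(img, count):
    """Recover `count` float64 measurements from the embedded pixels."""
    n = int(np.ceil(count * 8 / 3))
    raw = img[0, :n].reshape(-1)[:count * 8].tobytes()
    return np.frombuffer(raw, dtype=np.float64)

frame = np.zeros((480, 640, 3), dtype=np.uint8)
meas = [123456.789, 37.5665, 126.9780, 0.01, 65.0]  # t, lat, lon, rate, speed
assert np.allclose(extract(embed(frame, meas), len(meas)), meas)
```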

Autonomous Navigation of KUVE (KIST Unmanned Vehicle Electric)

  • 전창묵;서승범;이상훈;노치원;강성철;강연식
    • Journal of Institute of Control, Robotics and Systems, Vol. 16, No. 7, pp. 617-624, 2010
  • This article describes the system architecture of KUVE (KIST Unmanned Vehicle Electric) and its unmanned autonomous navigation at KIST. KUVE, an electric light-duty vehicle, is equipped with two laser range finders, a vision camera, a differential GPS system, an inertial measurement unit, odometers, and control computers for autonomous navigation. It estimates and tracks road boundaries such as curbs and lines using a laser range finder and a vision camera, and it follows a predetermined trajectory using the DGPS, IMU, and odometers when no road boundary is detectable (this fallback logic is sketched below). KUVE achieves an autonomous navigation success rate of over 80% at KIST.
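A toy sketch of the fallback behavior just described, with hypothetical gains and interfaces rather than KUVE's actual control software:

```python
import numpy as np

# Fallback sketch: steer relative to the detected road boundary when one is
# available, otherwise steer toward the next waypoint of the predetermined
# DGPS/IMU/odometry trajectory. Gains and interfaces are illustrative.
K_OFFSET, K_HEADING = 0.8, 1.2     # illustrative steering gains

def steering_cmd(pose, boundary_offset, waypoint):
    """pose = [x, y, heading]; boundary_offset = lateral offset (m) or None."""
    if boundary_offset is not None:            # boundary-following mode
        return -K_OFFSET * boundary_offset
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    err = np.arctan2(dy, dx) - pose[2]         # heading error to waypoint
    return K_HEADING * np.arctan2(np.sin(err), np.cos(err))  # wrap to [-pi, pi]
```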

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • Proceedings of the 9th International Conference on Construction Engineering and Project Management, pp. 344-352, 2022
  • Accurate indoor localization of construction workers and mobile assets is essential for safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are error-prone or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This paper proposes a positioning methodology that addresses them by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of a moving asset or worker from its starting point; this relative position is transformed to an absolute position when an AprilTag placed at an entry point is decoded (the anchoring step is sketched below). The proposed solution is tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift is estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be used to increase productivity and improve safety at construction sites, contributing toward 1) indoor monitoring of man-machinery co-activity for collision avoidance and 2) precise real-time knowledge of who is doing what and where.
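A minimal sketch of the relative-to-absolute anchoring step with 4x4 homogeneous transforms; the frame names are assumptions, not the paper's notation:

```python
import numpy as np

# Anchoring sketch: when a tag with a known surveyed world pose is decoded,
# the camera's world pose follows from the detection, and the constant
# odometry-to-world transform can be solved so that all subsequent SVIO
# poses become absolute. All frame names are hypothetical.
def inv_T(T):
    """Invert a rigid-body transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ t
    return Ti

def anchor(T_world_tag, T_cam_tag, T_odom_cam):
    """Solve T_world_odom from one AprilTag detection."""
    T_world_cam = T_world_tag @ inv_T(T_cam_tag)   # camera pose in the world
    return T_world_cam @ inv_T(T_odom_cam)         # fixed odom-to-world map

# Afterwards, every SVIO pose maps to the world frame:
#   T_world_cam_k = T_world_odom @ T_odom_cam_k
```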
