• Title/Abstract/Keyword: visual-inertial odometry

9 search results (processing time 0.026 s)

Infrared Visual Inertial Odometry via Gaussian Mixture Model Approximation of Thermal Image Histogram

  • 신재호;전명환;김아영
    • The Journal of Korea Robotics Society, Vol. 18, No. 3, pp. 260-270, 2023
  • We introduce a novel Visual Inertial Odometry (VIO) algorithm designed to improve the performance of thermal-inertial odometry. Thermal infrared images, though advantageous for feature extraction in low-light conditions, typically suffer from a high noise level and significant information loss during the 8-bit conversion. Our algorithm overcomes these limitations by approximating the 14-bit raw pixel histogram with a Gaussian mixture model. The conversion method effectively emphasizes image regions where texture for visual tracking is abundant while reducing unnecessary background information. We incorporate robust learning-based feature extraction and matching methods, SuperPoint and SuperGlue, and a zero-velocity detection module to further reduce the uncertainty of the visual odometry. Tested across various datasets, the proposed algorithm shows improved performance compared to other state-of-the-art VIO algorithms, paving the way for robust thermal-inertial odometry.
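The conversion described in this abstract amounts to histogram equalization against a smooth mixture model instead of the noisy raw histogram. A minimal numpy-only sketch follows; the mixture parameters here are assumed, fixed by hand rather than fitted by EM as a real pipeline would:

```python
import numpy as np
from math import erf

def gmm_tone_map(raw, weights, means, stds):
    """Collapse 14-bit raw thermal values to 8 bits by equalizing against a
    Gaussian-mixture model of the pixel histogram. Output levels are allocated
    in proportion to the mixture CDF, so intensity ranges where the model
    places mass (texture-rich modes) receive most of the 8-bit range while
    sparse background is compressed."""
    levels = np.arange(2 ** 14)
    cdf = np.zeros(len(levels))
    for w, mu, s in zip(weights, means, stds):
        cdf += w * np.array([0.5 * (1 + erf((v - mu) / (s * 2 ** 0.5)))
                             for v in levels])
    lut = np.round(255 * (cdf - cdf[0]) / (cdf[-1] - cdf[0])).astype(np.uint8)
    return lut[raw]

# Two hypothetical histogram modes; real parameters would come from an EM fit.
raw = np.array([[3900, 4100], [8900, 12000]])
out = gmm_tone_map(raw, weights=[0.6, 0.4], means=[4000.0, 9000.0],
                   stds=[300.0, 500.0])
```

Pixels near either mode land far apart in the 8-bit output, while values beyond the modes are pushed toward the extremes.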

Integrated Navigation Algorithm using Velocity Incremental Vector Approach with ORB-SLAM and Inertial Measurement

  • 김연조;손현진;이영재;성상경
    • The Transactions of the Korean Institute of Electrical Engineers, Vol. 68, No. 1, pp. 189-198, 2019
  • In recent years, visual-inertial odometry (VIO) algorithms have been studied extensively for indoor/urban environments because they are more robust to dynamic scenes and environmental changes. In this paper, we propose a loosely coupled (LC) VIO algorithm that uses the velocity vectors from both visual odometry (VO) and an inertial measurement unit (IMU) as the measurement of an extended Kalman filter. Our approach improves the estimation performance of the filter without adding extra sensors while maintaining a simple integration framework that treats the VO as a black box. For the VO algorithm, we employed a fundamental part of ORB-SLAM, which uses ORB features. We performed an outdoor experiment using an RGB-D camera to evaluate the accuracy of the presented algorithm, and we also evaluated it on a public dataset to compare it with other visual navigation systems.
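The loosely coupled structure the abstract describes — IMU driving the prediction step, VO velocity entering only as the filter measurement — can be sketched in one dimension. All signals and noise levels below are made up for illustration; the paper's filter is a full 3-D EKF:

```python
import numpy as np

# State x = [position, velocity]. The IMU acceleration drives the prediction;
# the VO velocity is the only measurement, so the VO pipeline stays a black box.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([[0.5 * dt ** 2], [dt]])   # acceleration input
H = np.array([[0.0, 1.0]])              # we observe velocity only
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-2]])                  # VO velocity noise (assumed)

x = np.zeros((2, 1))                    # start at rest at the origin
P = np.eye(2)
for k in range(100):                    # 1 s of constant 1 m/s^2 acceleration
    accel = 1.0                         # simulated IMU reading
    vo_vel = (k + 1) * dt               # simulated VO velocity, v = a * t
    x = F @ x + B * accel               # predict with the IMU
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[vo_vel]]) - H @ x)   # correct with VO velocity
    P = (np.eye(2) - K @ H) @ P
```

After the simulated second, the state converges to the true position of 0.5 m and velocity of 1 m/s.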

BIM model-based structural damage localization using visual-inertial odometry

  • Junyeon Chung;Kiyoung Kim;Hoon Sohn
    • Smart Structures and Systems, Vol. 31, No. 6, pp. 561-571, 2023
  • Ensuring the safety of a structure necessitates that repairs are carried out based on accurate inspections and records of damage information. Traditional methods of recording damage rely on individual paper-based documents, making it challenging for inspectors to accurately record damage locations and track chronological changes. Recent research has suggested adopting building information modeling (BIM) to record detailed damage information; however, localizing damage on a BIM model can be time-consuming. To overcome this limitation, this study proposes a method to automatically localize damage on a BIM model in real time, utilizing consecutive images and inertial measurement unit readings captured in close proximity to the damage. The proposed method employs a visual-inertial odometry algorithm to estimate the camera pose, detect damage, and compute the damage location in the coordinate system of a prebuilt BIM model. The feasibility and effectiveness of the proposed method were validated through an experiment conducted on a campus building. The results revealed that the proposed method successfully localized damage on the BIM model in real time, with a root mean square error of 6.6 cm.

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • Proceedings of the 9th International Conference on Construction Engineering and Project Management, pp. 344-352, 2022
  • Accurate indoor localization of construction workers and mobile assets is essential for safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are error-prone or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This research paper proposes a state-of-the-art positioning methodology that addresses the existing limitations by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of moving assets or workers from the initial starting point. This relative position is transformed to an absolute position when an AprilTag placed at one of the entry points is decoded. The proposed solution is tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift is estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be utilized in various use cases to increase productivity and improve safety at construction sites, contributing toward 1) indoor monitoring of human-machinery co-activity for collision avoidance and 2) precise real-time knowledge of who is doing what and where.
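The relative-to-absolute handoff at the tag can be sketched with homogeneous transforms. This is a minimal planar (SE(2)) illustration with assumed numeric poses, not the paper's 3-D implementation:

```python
import numpy as np

def se2(x, y, theta):
    """3x3 homogeneous transform for a planar pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# Surveyed tag pose in the site (world) frame -- assumed numbers.
T_world_tag = se2(10.0, 5.0, np.pi / 2)
# Tag pose seen by the camera at the moment the tag is decoded.
T_cam_tag = se2(2.0, 0.0, 0.0)
# SVIO pose of the camera in its own odometry frame at the same instant.
T_odom_cam = se2(3.0, 1.0, 0.0)

# Anchor the drifting odometry frame to the world frame.
T_world_cam = T_world_tag @ np.linalg.inv(T_cam_tag)
T_world_odom = T_world_cam @ np.linalg.inv(T_odom_cam)

# From now on, any SVIO pose becomes an absolute site position.
T_odom_cam_later = se2(6.0, 1.0, 0.0)   # asset moved 3 m in the odometry frame
abs_pose = T_world_odom @ T_odom_cam_later
```

For these assumed numbers the 3 m odometry-frame motion maps onto the world y-axis, giving an absolute position of (10, 6).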


Performance Evaluation of a Compressed-State Constraint Kalman Filter for a Visual/Inertial/GNSS Navigation System

  • Yu Dam Lee;Taek Geun Lee;Hyung Keun Lee
    • Journal of Positioning, Navigation, and Timing, Vol. 12, No. 2, pp. 129-140, 2023
  • Autonomous driving systems are likely to be operated in various complex environments. However, the well-known integrated Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS), currently the major source of absolute position information, still has difficulty achieving accurate positioning in harsh signal environments such as urban canyons. To overcome these difficulties, integrated Visual/Inertial/GNSS (VIG) navigation systems have been studied extensively in various areas. Recently, a Compressed-State Constraint Kalman Filter (CSCKF)-based VIG navigation system (CSCKF-VIG) using a monocular camera, an Inertial Measurement Unit (IMU), and GNSS receivers has been studied with the aim of providing robust and accurate position information in urban areas. Based on time-propagation measurement fusion theory, this new filter-based navigation system does not require the camera states to be kept in the system state. This paper presents a performance evaluation of the CSCKF-VIG system against other conventional navigation systems. First, the CSCKF-VIG is introduced in detail and compared with the well-known Multi-State Constraint Kalman Filter (MSCKF). The CSCKF-VIG system is then evaluated in a field experiment under different GNSS availability conditions. The results show improved accuracy in GNSS-degraded environments compared to the conventional systems.

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor

  • 문종식;이병윤
    • IEMEK Journal of Embedded Systems and Applications, Vol. 16, No. 3, pp. 107-111, 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation. However, generating a 3D point cloud map with a single sensor is limited by the price of expensive sensors. To solve this problem, we propose a precise 3D mapping system based on low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilize a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on these state values, 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, we confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
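The core of such a mapping loop is projecting each 2-D scan into the map frame with the VIO pose at the scan timestamp. A small sketch with assumed poses and ranges (a real system would also need time synchronization and extrinsic calibration between the two sensors):

```python
import numpy as np

def se3(R, t):
    """4x4 homogeneous transform from rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def scan_to_map(T_map_sensor, ranges, angles):
    """Place one 2-D LiDAR scan (ranges/bearings in the sensor's x-y plane)
    into the map frame using the VIO pose at the scan timestamp."""
    r = np.asarray(ranges, float)
    a = np.asarray(angles, float)
    pts = np.stack([r * np.cos(a), r * np.sin(a),
                    np.zeros_like(r), np.ones_like(r)])   # homogeneous points
    return (T_map_sensor @ pts).T[:, :3]

cloud = []
# Hypothetical VIO poses: origin, then 1 m forward and 0.5 m up.
cloud.append(scan_to_map(se3(np.eye(3), [0, 0, 0]), [2.0, 2.0], [0.0, np.pi / 2]))
cloud.append(scan_to_map(se3(np.eye(3), [1, 0, 0.5]), [2.0], [0.0]))
cloud = np.vstack(cloud)    # accumulated 3-D point cloud in the map frame
```

Because the sensor pose moves between scans, planar measurements accumulate into a 3-D cloud.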

Odometry Using Strong Features of Recognized Text

  • 송도훈;박종일
    • 2021 Summer Conference of the Korean Institute of Broadcast and Media Engineers, pp. 219-222, 2021
  • In this paper, optical character recognition (OCR) is used within a visual-inertial odometry (VIO) system to detect text regions and to store their positions and feature points, so that when the text is encountered again the stored data can be compared against the new observation. First, we propose a method that detects text in the video stream of a camera moving in real time and stores the location where the text was recognized, together with its feature points, using the relative pose of the camera. We also propose a method for deciding, when stored text is encountered again, whether it has genuinely been re-recognized. Because this approach relies on text already present in the scene rather than on artificial markers or pre-trained objects, it can be applied in any general environment where text exists.


Stereo Semi-direct Visual Odometry with Adaptive Motion Prior Weights of Lunar Exploration Rover

  • 정재형;허세종;박찬국
    • Journal of the Korean Society for Aeronautical & Space Sciences, Vol. 46, No. 6, pp. 479-486, 2018
  • On the lunar surface, where no satellite navigation system is available, a navigation algorithm that exploits additional sensors such as an inertial measurement unit or a camera is essential for reliable rover navigation. For example, stereo-camera-based visual odometry (VO) was used successfully on NASA's Mars exploration rovers. In this paper, the 6-DOF motion of a lunar exploration rover is estimated from stereo grayscale images of a lunar-analogue environment. The proposed algorithm estimates the relative motion between consecutive images via semi-direct VO based on sparse image alignment. In addition, to compensate for the vulnerability of direct VO to nonlinearity, a motion-dependent weight is applied to the cost function during optimization, and this weight is proposed as a linear function of the pose computed in the previous step. Using the proposed motion-dependent weights, improved VO performance is confirmed on the University of Toronto lunar-analogue dataset, which reflects the characteristics of the actual lunar environment.
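The effect of a motion-dependent prior weight can be illustrated with a toy one-dimensional least-squares step. The linear weight form and its coefficients below are assumed for illustration only, not taken from the paper:

```python
def solve_step(J, r, xi_prior, prev_pose, a=0.1, b=0.05):
    """Solve min_xi (J*xi - r)^2 + w*(xi - xi_prior)^2 for a scalar pose
    increment xi, where the prior weight w grows linearly with the pose
    computed at the previous step (coefficients a, b are assumed).
    Closed form from the normal equations of the weighted objective."""
    w = a * abs(prev_pose) + b
    return (J * r + w * xi_prior) / (J * J + w)

x_small_w = solve_step(1.0, 2.0, 0.0, prev_pose=0.0)    # weak prior weight
x_large_w = solve_step(1.0, 2.0, 0.0, prev_pose=10.0)   # strong prior weight
```

A larger previous pose yields a larger weight, pulling the new increment toward the motion prior and damping the step that the (potentially ill-conditioned) photometric term alone would take.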

Method to Improve Localization and Mapping Accuracy on the Urban Road Using GPS, Monocular Camera and HD Map

  • 김영훈;김재명;김기창;최윤수
    • Korean Journal of Remote Sensing, Vol. 37, No. 5-1, pp. 1095-1109, 2021
  • For safe autonomous driving, accurate self-localization and mapping of the surroundings are paramount. By combining many sensors such as a high-precision Global Positioning System (GPS), an inertial measurement unit (IMU), LiDAR, radar, and wheel odometry, and processing the sensor data on workstation-class PC hardware, cm-level self-localization and mapping are possible. However, the excessive data-fusion cost and poor economy of such expensive equipment combinations are obstacles to the popularization of autonomous driving. In this paper, we extend monocular visual SLAM by fusing it with an RTK-capable GPS, securing both accuracy and economy. In addition, errors are corrected using an HD map, and the system, ported to an embedded PC, achieved localization with an RMSE of 33.7 cm on urban roads while generating a map of the surroundings. The proposed method is expected to enable the development of safe, low-cost autonomous driving systems and the generation of accurate HD maps.
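One standard way to anchor a scale-ambiguous monocular SLAM trajectory to GPS fixes — a common building block, not necessarily the exact fusion used in this paper — is the Umeyama closed-form similarity alignment:

```python
import numpy as np

def align_similarity(slam_xy, gps_xy):
    """Umeyama alignment: find scale s, rotation R, and translation t
    minimizing sum ||gps_i - (s * R @ slam_i + t)||^2. The recovered s
    resolves the monocular scale ambiguity."""
    mu_s, mu_g = slam_xy.mean(0), gps_xy.mean(0)
    S, G = slam_xy - mu_s, gps_xy - mu_g
    C = G.T @ S / len(slam_xy)                           # cross-covariance
    U, D, Vt = np.linalg.svd(C)
    E = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # reflection guard
    R = U @ E @ Vt
    s = (D * np.diag(E)).sum() / S.var(0).sum()
    t = mu_g - s * R @ mu_s
    return s, R, t
```

Given matched timestamps, feeding SLAM positions and GPS fixes to this function recovers the metric scale and the rigid placement of the SLAM trajectory in the GPS frame exactly for noise-free data, and in the least-squares sense otherwise.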