• Title/Summary/Keyword: SLAM (Simultaneous Localization And Mapping)

Improvement of SLAM Using Invariant EKF for Autonomous Vehicles (Invariant EKF를 사용한 자율 이동체의 SLAM 개선)

  • Jeong, Da-Bin;Ko, Nak-Yong;Chung, Jun-Hyuk;Pyun, Jae-Young;Hwang, Suk-Seung;Kim, Tae-Woon
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.2 / pp.237-244 / 2020
  • This paper describes an implementation of Simultaneous Localization and Mapping (SLAM) in two-dimensional space. The method uses the Invariant Extended Kalman Filter (IEKF), which transforms the state and measurement variables so that the transformed variables constitute a linear space as long as quantities called the invariant quantities are kept constant. Therefore, the IEKF guarantees convergence provided that the invariant quantities are kept constant. The proposed IEKF approach uses a Lie group matrix for the transformation. The method is tested through simulation, and the results show that the Kalman gain is constant, as is the case for the linear Kalman filter. The coherence between the estimated locations of the vehicle and the detected objects verifies the estimation performance of the method.
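
The abstract does not give the underlying equations; purely as an illustrative sketch (not taken from the paper, and using my own function names), the snippet below shows the SE(2) Lie-group machinery and the left-invariant error X̂⁻¹X that an invariant EKF estimates in place of the usual additive state error for a planar vehicle.

```python
# Minimal sketch of the left-invariant error on SE(2) that underlies
# Invariant-EKF SLAM in the plane (illustrative only, not the paper's code).
import numpy as np

def se2(x, y, theta):
    """Homogeneous SE(2) matrix for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def se2_inv(T):
    """Inverse of an SE(2) matrix."""
    R, t = T[:2, :2], T[:2, 2]
    Ti = np.eye(3)
    Ti[:2, :2] = R.T
    Ti[:2, 2] = -R.T @ t
    return Ti

def log_se2(T):
    """Logarithm map: SE(2) matrix -> (v_x, v_y, theta) twist coordinates."""
    theta = np.arctan2(T[1, 0], T[0, 0])
    if abs(theta) < 1e-9:
        V_inv = np.eye(2)
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta
        V_inv = np.linalg.inv(np.array([[a, -b], [b, a]]))
    v = V_inv @ T[:2, 2]
    return np.array([v[0], v[1], theta])

# True pose and an estimate; the left-invariant error is eta = X_hat^{-1} X.
X_true = se2(2.0, 1.0, 0.3)
X_hat  = se2(1.9, 1.1, 0.25)
eta = se2_inv(X_hat) @ X_true
print("invariant error (twist coordinates):", log_se2(eta))
```

Because the filter propagates this group error rather than a state-dependent linearization error, the Kalman gain can remain constant, which is what the simulation in the paper reports.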

Real-Time Individual Tracking of Multiple Moving Objects for Projection based Augmented Visualization (다중 동적객체의 실시간 독립추적을 통한 프로젝션 증강가시화)

  • Lee, June-Hyung;Kim, Ki-Hong
    • Journal of Digital Convergence / v.12 no.11 / pp.357-364 / 2014
  • AR content flickers while the camera images are being updated if the markers to be tracked move fast. Conventional methods employing image-based markers and SLAM algorithms for tracking objects have the problem that they do not allow more than two objects to be tracked simultaneously and to interact with each other in the same camera scene. In this paper, an improved SLAM-type algorithm for tracking dynamic objects is proposed and investigated to solve this problem. To this end, a method using two virtual cameras for one physical camera is adopted, which lets the two tracked objects interact with each other; this becomes possible because the two objects are perceived separately by the single physical camera. Mobile robots used as dynamic objects are synchronized with virtual robots in well-designed content, demonstrating the usefulness of applying individual tracking of multiple moving objects to projection-based augmented visualization.
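
The "two virtual cameras from one physical camera" idea can be illustrated with a hedged OpenCV sketch, assuming a simple left/right split of the frame; the paper's actual splitting and tracking pipeline may differ.

```python
# Rough sketch: treat one physical camera frame as two "virtual cameras" by
# splitting it into two views and tracking features in each view independently.
import cv2

orb = cv2.ORB_create(nfeatures=500)

def virtual_camera_views(frame):
    """Split one physical camera frame into left/right 'virtual camera' views."""
    h, w = frame.shape[:2]
    return frame[:, : w // 2], frame[:, w // 2 :]

def track_view(prev_view, curr_view):
    """Detect ORB features in a single virtual view and match them frame-to-frame."""
    g1 = cv2.cvtColor(prev_view, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_view, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return matcher.match(des1, des2)

# Hypothetical usage: each moving object lives in its own virtual view, so the
# two objects can be tracked (and made to interact) independently.
# left_prev, right_prev = virtual_camera_views(prev_frame)
# left_curr, right_curr = virtual_camera_views(curr_frame)
# matches_obj1 = track_view(left_prev, left_curr)
# matches_obj2 = track_view(right_prev, right_curr)
```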

Location Measurement System for Automated Operation of Construction Machinery Using Visual SLAM

  • Masaki CHINO;Atsushi YAMASHITA
    • International conference on construction engineering and project management / 2024.07a / pp.1019-1026 / 2024
  • In the construction industry, there is a growing demand for improved productivity, and the development of autonomous operation systems for construction machinery is progressing. Autonomous operation of construction machinery requires positioning information because construction must be carried out at planned locations. In this paper, we focus on Visual Simultaneous Localization and Mapping (Visual SLAM) as a method for obtaining location information for construction machinery and propose an automated operation system using Visual SLAM. For automated driving, an indirect Visual SLAM method based on ORB features is used, with additional processing such as masking of surrounding moving objects and measurement of the initial position using markers. With the proposed system, it was confirmed that automated operation is possible in an experimental environment using the location information output by Visual SLAM. In addition, an experiment was conducted to verify the measurement accuracy of Visual SLAM during construction work at actual construction sites. As a result, the measurement error was less than 500 mm, which is a usable accuracy for actual construction. With this system, the location of construction machinery can be obtained even in environments where GNSS cannot be used, and productivity at construction sites can be improved through automated operation.
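
The masking of surrounding moving objects mentioned above can be sketched with a few lines of OpenCV; the bounding boxes are assumed to come from some external detector, and this is an assumed setup rather than the authors' implementation.

```python
# Sketch of the masking step: zero out moving-object regions in the feature
# mask so ORB keypoints used by Visual SLAM come only from static scene parts.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def static_orb_features(gray, moving_object_boxes):
    """Extract ORB features outside the given moving-object bounding boxes."""
    mask = np.full(gray.shape, 255, dtype=np.uint8)   # 255 = usable pixel
    for (x, y, w, h) in moving_object_boxes:          # boxes from any detector
        mask[y:y + h, x:x + w] = 0                    # suppress dynamic regions
    keypoints, descriptors = orb.detectAndCompute(gray, mask)
    return keypoints, descriptors
```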

Performance Analysis of Optimization Method and Filtering Method for Feature-based Monocular Visual SLAM (특징점 기반 단안 영상 SLAM의 최적화 기법 및 필터링 기법 성능 분석)

  • Jeon, Jin-Seok;Kim, Hyo-Joong;Shim, Duk-Sun
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.1 / pp.182-188 / 2019
  • Autonomous mobile robots need SLAM (simultaneous localization and mapping) to estimate their location and simultaneously build a map of the surroundings. To achieve visual SLAM, an algorithm is needed that detects and extracts feature points from camera images and estimates the camera pose and the 3D positions of the features. In this paper, we propose the MPROSAC algorithm, which combines MSAC and PROSAC, and compare the performance of an optimization method and a filtering method for feature-based monocular visual SLAM. Sparse Bundle Adjustment (SBA) is used for the optimization method and the extended Kalman filter is used for the filtering method.
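
MPROSAC itself is the authors' contribution and its details are not given in the abstract. The sketch below only illustrates, on a toy 2D line-fitting problem, how PROSAC-style quality-ordered sampling can be combined with MSAC's truncated-quadratic scoring; it is not the paper's algorithm.

```python
# Toy illustration: PROSAC-like sampling (draw from the best-ranked matches
# first) combined with MSAC scoring (truncated quadratic cost).
import numpy as np

def msac_cost(residuals, threshold):
    """Inliers contribute their squared error, outliers a constant penalty."""
    sq = residuals ** 2
    return np.sum(np.minimum(sq, threshold ** 2))

def prosac_msac_line(points, quality, threshold=0.1, iters=200, seed=0):
    """Fit y = a*x + b to (N,2) points, sampling preferentially by quality."""
    rng = np.random.default_rng(seed)
    order = np.argsort(-quality)              # PROSAC idea: best-ranked first
    best_model, best_cost = None, np.inf
    for i in range(iters):
        # progressively enlarge the sampling pool over the ranked points
        pool = order[: max(2, 2 + int(len(points) * (i + 1) / iters))]
        i1, i2 = rng.choice(pool, size=2, replace=False)
        (x1, y1), (x2, y2) = points[i1], points[i2]
        if abs(x2 - x1) < 1e-9:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = points[:, 1] - (a * points[:, 0] + b)
        cost = msac_cost(residuals, threshold)
        if cost < best_cost:
            best_model, best_cost = (a, b), cost
    return best_model
```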

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D RANSAC (RANdom SAmple Consensus) algorithm with 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud map.
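
The geometric core of this kind of RGB-D visual odometry (matched 2D features plus per-pixel depth, followed by a rigid-motion fit) can be sketched as follows. The pinhole intrinsics, the surrounding RANSAC loop, and the GPU acceleration are assumed or omitted; this is illustrative, not the paper's pipeline.

```python
# Back-project matched 2D features to 3D with depth, then estimate the rigid
# 6-DOF motion between the two 3D point sets with an SVD (Kabsch) fit.
import numpy as np

def backproject(uv, depth, fx, fy, cx, cy):
    """Back-project pixel coordinates (N,2) with depths (N,) to 3D points (N,3)."""
    u, v = uv[:, 0], uv[:, 1]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=1)

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t for matched 3D point sets (N,3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice this two-point-set fit would be run inside the 3D RANSAC loop the abstract mentions, keeping the hypothesis with the most depth-consistent inliers.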

A New Method for Relative/Quantitative Comparison of Map Built by SLAM (SLAM으로 작성한 지도 품질의 상대적/정량적 비교를 위한 방법 제안)

  • Kwon, Tae-Bum;Chang, Woo-Sok
    • The Journal of Korea Robotics Society / v.9 no.4 / pp.242-249 / 2014
  • Using a SLAM (simultaneous localization and mapping) method, we obtain a map of an environment for autonomous navigation of a robot. In this case, we want to know how accurate the map is, or which map is more accurate when different maps are obtained by different SLAM methods. Several methods for map comparison have been studied, but they have their own drawbacks. In this paper, we propose a new method that compares the accuracy, or error, of maps relatively and quantitatively. The method sets many corresponding points on both the reference map and the SLAM map, and computes the translational and rotational values of all corresponding points using a least-squares solution. By analyzing the standard deviations of all translational and rotational values, we can quantify the error between the two maps. This method can consider both local and global errors, while other methods deal with only one of them, and this is verified by a series of simulations and real-world experiments.
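
One reading of this procedure can be sketched as follows; the abstract does not give the exact formulation, so this is an assumption: align the corresponding points with a 2D least-squares rigid fit and use the spread of the residuals as a relative, quantitative map-quality measure.

```python
# Sketch of relative map comparison via corresponding points:
# rigid least-squares alignment followed by residual statistics.
import numpy as np

def align_2d(ref_pts, slam_pts):
    """Least-squares 2D rotation/translation mapping slam_pts onto ref_pts."""
    cr, cs = ref_pts.mean(axis=0), slam_pts.mean(axis=0)
    H = (slam_pts - cs).T @ (ref_pts - cr)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cr - R @ cs
    return R, t

def map_error_stats(ref_pts, slam_pts):
    """Residual statistics of corresponding points after rigid alignment."""
    R, t = align_2d(ref_pts, slam_pts)
    residuals = ref_pts - (slam_pts @ R.T + t)
    dists = np.linalg.norm(residuals, axis=1)
    return dists.mean(), dists.std()   # larger spread => less consistent map
```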

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps (천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM)

  • Hwang, Seo-Yeon;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.164-170 / 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method using both the position and orientation information of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamp features are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose; the orientation is obtained by calculating the principal axis from the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
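
The principal-axis computation for a lamp region can be sketched as a small PCA over the blob's pixel coordinates; the binary segmentation of the lamp is assumed to be available, and this is an illustration rather than the authors' code.

```python
# Estimate a lamp's orientation as the principal axis of its pixel distribution.
import numpy as np

def lamp_principal_axis(lamp_mask):
    """Return (cx, cy, angle): blob centroid and principal-axis angle in radians."""
    ys, xs = np.nonzero(lamp_mask)                 # pixel coordinates of the lamp blob
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)               # 2x2 covariance of the pixel cloud
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]         # direction of largest spread
    angle = np.arctan2(major[1], major[0])
    return centroid[0], centroid[1], angle
```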

Result Representation of Rao-Blackwellized Particle Filter for Mobile Robot SLAM (Rao-Blackwellized 파티클 필터를 이용한 이동로봇의 위치 및 환경 인식 결과 도출)

  • Kwak, No-San;Lee, Beom-Hee;Yokoi, Kazuhito
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.308-314 / 2008
  • Recently, simultaneous localization and mapping (SLAM) approaches employing the Rao-Blackwellized particle filter (RBPF) have shown good results. However, little research has been conducted on how to represent the result of SLAM using RBPF (RBPF-SLAM) when particle diversity is preserved. After particle filtering is finished, the results, such as a map and a path, are stored in the separate particles. Thus, we propose several result representations and provide an analysis of them. For the analysis, estimation errors and their variances, and the consistency of RBPF-SLAM, are dealt with in this study. According to the simulation results, combining the data of each particle provides, with high probability, a better result than using the data of a single particle such as the highest-weighted one.
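
Two of the result representations being compared can be sketched as follows; the particle layout (pose as x, y, theta per particle) is an assumption for illustration.

```python
# Compare two RBPF-SLAM result representations: the single highest-weighted
# particle versus a weighted combination of all particles.
import numpy as np

def highest_weight_estimate(poses, weights):
    """Pose of the single best particle (poses: (N,3) as x, y, theta)."""
    return poses[np.argmax(weights)]

def weighted_mean_estimate(poses, weights):
    """Weighted combination of all particles; angles averaged on the circle."""
    w = weights / weights.sum()
    xy = (poses[:, :2] * w[:, None]).sum(axis=0)
    theta = np.arctan2((w * np.sin(poses[:, 2])).sum(),
                       (w * np.cos(poses[:, 2])).sum())
    return np.array([xy[0], xy[1], theta])
```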

Mobile Robot Localization in Geometrically Similar Environment Combining Wi-Fi with Laser SLAM

  • Gengyu Ge;Junke Li;Zhong Qin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1339-1355 / 2023
  • Localization is a hot research topic in many areas, especially in the mobile robot field. Because the global positioning system (GPS) signal is weak indoors, alternative schemes for indoor environments include wireless signal transmitting and receiving solutions, laser rangefinders used to build a map followed by a re-localization stage, and visual positioning methods. Among wireless signal positioning techniques, Wi-Fi is the most common: access points are installed in most indoor areas of human activity, and smart devices equipped with Wi-Fi modules can be seen everywhere. However, localization of a mobile robot using a Wi-Fi scheme usually lacks orientation information, and the distance error is large because of indoor signal interference. Another research direction, which mainly relies on laser sensors, is to actively sense the environment and achieve positioning: an occupancy grid map is built using the simultaneous localization and mapping (SLAM) method when the mobile robot enters the indoor environment for the first time, and when the robot enters the environment again it can localize itself against the known map. Nevertheless, this scheme works effectively only if the areas have salient geometrical features; if the areas have similar scanning structures, such as a long corridor or similar rooms, traditional methods always fail. To address the weaknesses of these two methods, this work proposes a coarse-to-fine paradigm and an improved localization algorithm that uses Wi-Fi to assist robot localization in a geometrically similar environment. Firstly, a grid map is built using laser SLAM. Secondly, a fingerprint database is built in the offline phase. Then, RSSI values are obtained in the localization stage to get a coarse localization. Finally, an improved particle filter method based on the Wi-Fi signal values is proposed to realize a fine localization. Experimental results show that our approach is effective and robust for both global localization and the kidnapped-robot problem. The localization success rate reaches 97.33%, while the traditional method always fails.
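
The coarse Wi-Fi stage can be sketched as a nearest-neighbour lookup in the offline RSSI fingerprint database; the data layout below is assumed, and the fine particle-filter stage on the laser grid map is omitted.

```python
# Coarse Wi-Fi localization: match a live RSSI scan against the offline
# fingerprint database; the result seeds the particle-filter fine stage.
import numpy as np

def coarse_wifi_localization(fingerprint_rssi, fingerprint_xy, live_rssi):
    """fingerprint_rssi: (M, K) dBm values for K access points at M map positions;
    fingerprint_xy: (M, 2) positions; live_rssi: (K,) current scan."""
    dists = np.linalg.norm(fingerprint_rssi - live_rssi, axis=1)
    nearest = np.argmin(dists)
    return fingerprint_xy[nearest]        # coarse (x, y) prior for the fine stage
```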