• Title/Summary/Keyword: Simultaneous localization and mapping algorithm

52 results found (processing time: 0.025 s)

SLAM of a Mobile Robot using Thinning-based Topological Information

  • Lee, Yong-Ju;Kwon, Tae-Bum;Song, Jae-Bok
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 5, No. 5
    • /
    • pp.577-583
    • /
    • 2007
  • Simultaneous Localization and Mapping (SLAM) is the process of building a map of an unknown environment while simultaneously localizing a robot relative to this map. SLAM is very important for the indoor navigation of a mobile robot, and much research has been conducted on this subject. Although feature-based SLAM using an Extended Kalman Filter (EKF) is widely used, it has the shortcoming that its computational complexity grows in proportion to the square of the number of features. This prevents EKF-SLAM from operating in real time and makes it infeasible in large environments where many features exist. This paper presents an algorithm which reduces the computational complexity of EKF-SLAM by using topological information (TI) extracted through a thinning process. The global map can be divided into local areas using the nodes of a thinning-based topological map, and SLAM is then performed in local areas instead of the global one. Experimental results for various environments show that the performance and efficiency of the proposed EKF-SLAM/TI scheme are excellent.
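The quadratic-cost argument above can be sketched numerically. This is only an illustration of why restricting EKF updates to a local area helps, not the paper's implementation; the feature counts and the 10-area split are made-up numbers.

```python
def ekf_update_cost(n_features):
    # EKF-SLAM keeps a (3 + 2n) x (3 + 2n) covariance matrix (robot pose
    # plus 2D feature positions); a measurement update touches every
    # entry, so the cost grows with the square of the state dimension.
    dim = 3 + 2 * n_features
    return dim * dim

# Dividing a 200-feature global map into 10 local areas (as the
# thinning-based topological nodes would) shrinks each update:
global_cost = ekf_update_cost(200)       # 403 x 403 covariance
local_cost = ekf_update_cost(200 // 10)  # 43 x 43 covariance
print(global_cost // local_cost)         # roughly 87x cheaper per update
```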

An Evaluation System to Determine the Completeness of a Space Map Obtained by Visual SLAM

  • 김한솔;감제원;황성수
    • 한국멀티미디어학회논문지
    • /
    • Vol. 22, No. 4
    • /
    • pp.417-423
    • /
    • 2019
  • This paper presents an evaluation system to determine the completeness of a space map obtained by a visual SLAM (Simultaneous Localization And Mapping) algorithm. The proposed system consists of three parts. First, the system detects the occurrence of loop closing to confirm that users have acquired information from all directions. Thereafter, the acquired map is divided at regular intervals, and each area is verified to have enough map points to successfully estimate the user's position. Finally, to check the effectiveness of each map point, the system checks whether the map points remain identifiable even at locations far from the acquisition position. Experimental results show that space maps whose completeness is proven by the proposed system have higher stability and accuracy in terms of position estimation than maps that are not proven.
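The second check (dividing the map at regular intervals and counting map points per area) can be sketched as follows. The cell size and minimum-point threshold here are illustrative assumptions, not the paper's values.

```python
import math

def check_grid_coverage(map_points, cell_size=1.0, min_points=10):
    """Divide a 2D map into regular cells and flag whether each cell
    holds enough map points for reliable position estimation."""
    counts = {}
    for x, y in map_points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        counts[key] = counts.get(key, 0) + 1
    return {key: n >= min_points for key, n in counts.items()}

dense = [(0.05 * i, 0.04 * i) for i in range(12)]  # 12 points in cell (0, 0)
sparse = [(2.5, 2.5), (2.6, 2.4)]                  # only 2 points in cell (2, 2)
coverage = check_grid_coverage(dense + sparse)
print(coverage)  # {(0, 0): True, (2, 2): False}
```

Cells marked `False` would be reported to the user as areas needing more acquisition passes.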

Obstacle Detection and Safe Landing Site Selection for Delivery Drones at Delivery Destinations without Prior Information

  • 서민철;한상익
    • 자동차안전학회지
    • /
    • Vol. 16, No. 2
    • /
    • pp.20-26
    • /
    • 2024
  • Drone delivery has been attracting attention because it can dramatically reduce the time from order to completed delivery compared to the current delivery system, and pilot projects for safe drone delivery have been conducted. However, the current drone delivery system limits the operational efficiency offered by fully autonomous delivery drones, in that drones mainly deliver goods to pre-set landing sites or delivery bases and the final delivery is still made by humans. In this paper, to overcome these limitations, we propose an obstacle detection and landing site selection algorithm based on a vision sensor that enables safe drone landing at the orderer's delivery location, and experimentally demonstrate the possibility of station-to-door delivery. The proposed algorithm builds a 3D point-cloud map based on simultaneous localization and mapping (SLAM) technology and applies a grid segmentation technique, allowing drones to reliably find a landing site even in places without prior information. We aim to verify the performance of the proposed algorithm through streaming data received from the drone.
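A minimal sketch of the grid segmentation idea: cells of the SLAM point cloud whose height spread is small are flat-landing candidates. The cell size, flatness threshold, and minimum point count are invented for illustration, not taken from the paper.

```python
import statistics

def select_landing_cells(points, cell=1.0, max_stdev=0.05, min_pts=3):
    """Grid-segment a 3D point cloud of (x, y, z) tuples and keep cells
    whose height spread is small enough to be a flat landing candidate."""
    cells = {}
    for x, y, z in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append(z)
    return [key for key, zs in cells.items()
            if len(zs) >= min_pts and statistics.pstdev(zs) <= max_stdev]

flat = [(0.2, 0.3, 0.00), (0.5, 0.6, 0.01), (0.8, 0.1, 0.00)]   # level ground
bumpy = [(2.2, 2.3, 0.0), (2.5, 2.6, 0.8), (2.8, 2.1, 0.4)]     # obstacle
print(select_landing_cells(flat + bumpy))  # [(0, 0)]
```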

Three-dimensional Map Construction of Indoor Environment Based on RGB-D SLAM Scheme

  • Huang, He;Weng, FuZhou;Hu, Bo
    • 한국측량학회지
    • /
    • Vol. 37, No. 2
    • /
    • pp.45-53
    • /
    • 2019
  • RGB-D SLAM (Simultaneous Localization and Mapping) refers to using a depth camera as the visual sensor for SLAM. Given the disadvantages of high cost and scale ambiguity when constructing maps with laser sensors or traditional monocular and binocular cameras, a method for creating a three-dimensional map of an indoor environment from depth data combined with an RGB-D SLAM scheme is studied. The method uses a mobile robot system equipped with a consumer-grade RGB-D sensor (Kinect) to acquire depth data, and then creates indoor three-dimensional point-cloud maps in real time through key technologies such as positioning point generation, closed-loop detection, and map construction. Field experiment results show that the average error of the point-cloud map created by the algorithm is 0.0045 m, which ensures the stability of construction from depth data and allows accurate real-time three-dimensional mapping of unknown indoor environments.
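The core geometric step in turning Kinect depth frames into a point cloud is pinhole back-projection. A minimal sketch follows; the intrinsics default to commonly quoted Kinect-style values, since the paper does not state its calibration, and a real system must use its own calibrated parameters.

```python
def depth_to_point(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project one depth pixel (u, v), with depth in metres, into a
    3D point in the camera frame using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight down the optical axis:
print(depth_to_point(319.5, 239.5, 2.0))  # (0.0, 0.0, 2.0)
```

Applying this to every valid pixel of each registered depth frame, then transforming by the estimated camera pose, yields the point-cloud map the abstract describes.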

Width Estimation of Stationary Objects using Radar Images for Autonomous Driving of Unmanned Ground Vehicles

  • 김성준;양동원;김수진;정영헌
    • 한국군사과학기술학회지
    • /
    • Vol. 18, No. 6
    • /
    • pp.711-720
    • /
    • 2015
  • Recently, many studies of radar systems mounted on ground vehicles for autonomous driving, SLAM (simultaneous localization and mapping), and collision avoidance have been reported. Since an object may span several pixels in a close-range radar application, its width can be estimated automatically by various signal-processing techniques. In this paper, we develop an algorithm to estimate obstacle width from radar images. The proposed method consists of five steps: 1) background clutter reduction, 2) local peak pixel detection, 3) region growing, 4) contour extraction, and 5) width calculation. To validate the method, we estimated widths using real data of two cars acquired by a commercial radar system (the I200, manufactured by Navtech). The results verify that the proposed method can estimate the widths of targets.
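Steps 3 and 5 of the pipeline can be sketched together: grow a region from a detected peak pixel, then take its pixel extent as the width. The image, seed, and threshold are toy values; the paper's clutter reduction and contour extraction (steps 1, 2, and 4) are assumed to have run already.

```python
def region_grow(image, seed, threshold):
    """4-connected region growing from a local peak pixel: absorb
    neighbours whose intensity stays at or above `threshold`."""
    h, w = len(image), len(image[0])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if image[r][c] < threshold:
            continue
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
blob = region_grow(img, seed=(1, 1), threshold=5)
# Step 5: width in pixels along the column axis of the grown region.
width = max(c for _, c in blob) - min(c for _, c in blob) + 1
print(width)  # 2
```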

Loop Closure in a Line-based SLAM

  • 장국현;서일홍
    • 로봇학회논문지
    • /
    • Vol. 7, No. 2
    • /
    • pp.120-128
    • /
    • 2012
  • The loop closure problem is one of the most challenging issues in the vision-based simultaneous localization and mapping community. It requires the robot to recognize a previously visited place from current camera measurements. While previous works often rely on a visual bag-of-words built from point features, in this paper we propose a line-based method to solve loop closure in corridor environments. We use both floor lines and anchored vanishing points as loop-closing features, and devise a two-step loop closure algorithm to detect a known place and perform global pose correction. We propose the anchored vanishing point as a novel loop closure feature, as it includes position information and represents vanishing points in both directions. In our system, the accumulated heading error is first reduced using observations of previously registered anchored vanishing points, and observations of known floor lines then allow further pose correction. Experimental results show that our method is an efficient loop closure solution in structured indoor environments.

A New Observation Model to Improve the Consistency of the EKF-SLAM Algorithm in Large-scale Environments

  • 남창주;강재현;도낙주
    • 로봇학회논문지
    • /
    • Vol. 7, No. 1
    • /
    • pp.29-34
    • /
    • 2012
  • This paper suggests a new observation model for Extended Kalman Filter based Simultaneous Localization and Mapping (EKF-SLAM). Since the EKF framework linearizes non-linear functions around the current estimate, the conventional line model suffers large linearization errors when a mobile robot is located far from its initial position. The model we propose yields smaller linearization error with respect to the landmark position and is thus suitable for large-scale environments. To achieve this, we build a three-dimensional space by adding a virtual axis to the robot's two-dimensional coordinate system, and extract a plane from a line detected in the two-dimensional space together with the virtual axis. Since the Jacobian with respect to the landmark position then has small values, the landmark positions can be estimated better than with the conventional line model. Simulation results verify that the new model yields smaller linearization errors than the conventional line model.

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Images

  • 최윤원;최정원;임성규;이석규
    • 제어로봇시스템학회논문지
    • /
    • Vol. 22, No. 3
    • /
    • pp.210-216
    • /
    • 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). This method estimates the avoidance path and speed of a robot from the location of an obstacle, which can be detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robots. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM. Robots can also avoid obstacles using speed commands based on the robot model and a curved movement path, and recent research has improved these methods by optimizing the algorithms for actual robots. However, comparatively little work has used omni-directional vision SLAM to acquire the surrounding information at once. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map obtained through omni-directional vision SLAM using fisheye images, and then returns to the original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the surroundings of the obstacles. The experimental results confirm the reliability of the avoidance algorithm through comparison between the position estimated by the proposed algorithm and the actual position collected while avoiding the obstacles.

Study of Deep Learning Based Specific Person Following Mobility Control for Logistics Transportation

  • 유영준;강성훈;김주환;노성인;이기현;이승용;이철희
    • 드라이브 ㆍ 컨트롤
    • /
    • Vol. 20, No. 4
    • /
    • pp.1-8
    • /
    • 2023
  • In recent years, robots have been utilized in various industries to reduce workload and enhance work efficiency. A following mobility offers users convenience by autonomously tracking specific locations and targets without the need for additional equipment such as forklifts or carts. In this paper, deep learning techniques were employed to recognize individuals and assign each a unique identifier, enabling the recognition of a specific person even among multiple individuals. The distance and angle between the robot and the targeted individual are then transmitted to the respective controllers. Furthermore, this study explored control methodologies for a mobility that tracks a specific person, utilizing Simultaneous Localization and Mapping (SLAM) and Proportional-Integral-Derivative (PID) control. In the PID control method, a genetic algorithm is employed to extract the optimal gain values, and PID performance is subsequently evaluated through simulation. The SLAM method generates a map by synchronizing data from a 2D LiDAR and a depth camera using Real-Time Appearance-Based Mapping (RTAB-Map). Experiments are conducted to compare and analyze the performance of the two control methods, visualizing the paths of both the human and the following mobility.
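The GA-tuned PID idea can be sketched as below. The first-order plant, gain ranges, and GA settings are all invented stand-ins for illustration, not the paper's mobility model or its genetic-algorithm configuration.

```python
import random

def pid_cost(gains, setpoint=1.0, dt=0.02, steps=200):
    """Integrated absolute tracking error of a PID loop driving a toy
    first-order plant dx/dt = -x + u (a stand-in for the real mobility)."""
    kp, ki, kd = gains
    x, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        x += (-x + kp * err + ki * integ + kd * deriv) * dt
        prev_err = err
        cost += abs(err) * dt
        if not -1e6 < x < 1e6:  # penalize gains that destabilize the loop
            return 1e9
    return cost

def ga_tune(pop_size=20, generations=15, seed=1):
    """Toy genetic algorithm: keep the better half of the population each
    generation and breed children by Gaussian mutation of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=pid_cost)[:pop_size // 2]
        children = [[max(0.0, g + rng.gauss(0.0, 0.3)) for g in p]
                    for p in survivors]
        pop = survivors + children
    return min(pop, key=pid_cost)

best_gains = ga_tune()  # (Kp, Ki, Kd) with the lowest simulated cost
```

Because the survivors are carried over unchanged (elitism), the best cost never worsens between generations.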

Object Detection of AGV in Manufacturing Plants using Deep Learning

  • 이길원;이활리;정희운
    • 한국정보통신학회논문지
    • /
    • Vol. 25, No. 1
    • /
    • pp.36-43
    • /
    • 2021
  • In this paper, we examine the accuracy of the YOLO v3 algorithm for object detection while an AGV (Automated Guided Vehicle) travels in a manufacturing plant. For the experiment, an AGV equipped with a 2D LiDAR and a stereo camera was prepared. While the AGV was driving, map information was obtained with a SLAM technique using the 2D LiDAR, and object detection was performed with the stereo camera. Recall, AP, and mAP were then measured according to the degree of training of the YOLO v3 algorithm. Experimental results show that when a YOLO v3 model trained with 4,000 training images and 500 test images was additionally trained with 1,200 images acquired at the viewpoint and height of the stereo camera mounted on the AGV, mAP improved by about 10%. Precision and recall also improved by 6.8% and 16.4%, respectively.
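For reference, the precision and recall metrics quoted above are computed from detection counts as follows. The counts in this sketch are made up for illustration and are not the paper's data.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true positives, false positives
    (spurious detections), and false negatives (missed objects)."""
    precision = tp / (tp + fp)  # fraction of detections that are correct
    recall = tp / (tp + fn)     # fraction of real objects that were found
    return precision, recall

# Hypothetical counts: 82 AGVs detected correctly, 8 spurious boxes,
# 18 AGVs missed.
p, r = precision_recall(82, 8, 18)
print(round(p, 3), round(r, 3))  # 0.911 0.82
```

AP averages precision over recall levels for one class, and mAP averages AP over all classes, which is why extra in-domain training images can lift all three numbers together.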