• Title/Summary/Keyword: cloud robotics


Elevator Recognition and Position Estimation based on RGB-D Sensor for Safe Elevator Boarding (이동로봇의 안전한 엘리베이터 탑승을 위한 RGB-D 센서 기반의 엘리베이터 인식 및 위치추정)

  • Jang, Min-Gyung;Jo, Hyun-Jun;Song, Jae-Bok
    • The Journal of Korea Robotics Society, v.15 no.1, pp.70-76, 2020
  • Multi-floor navigation of a mobile robot requires technology that allows the robot to get on and off an elevator safely. In this study, we therefore propose a method for recognizing the elevator from the robot's current position and locally estimating the elevator's location, so that the robot can board safely regardless of the position error accumulated during autonomous navigation. The proposed method uses a deep-learning-based image classifier to identify the elevator in the image obtained from the RGB-D sensor, and extracts the boundary points between the elevator and the surrounding wall from the point cloud. This enables the robot to estimate a reliable position and boarding direction in real time for general elevators. Various experiments demonstrate the effectiveness and accuracy of the proposed method.
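
The boundary-extraction step described above lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation: it assumes an organized depth image from the RGB-D sensor and flags large horizontal depth discontinuities (jump edges) as candidate boundary points between the elevator opening and the surrounding wall; the 0.5 m threshold is an invented value.

```python
import numpy as np

def boundary_candidates(depth, jump_thresh=0.5):
    """Flag pixels where depth jumps sharply between horizontal
    neighbors, a crude proxy for the elevator/wall boundary.

    depth: (H, W) array of range values in meters from an RGB-D sensor.
    jump_thresh: minimum depth discontinuity (m) to call a boundary
                 (an illustrative value, not from the paper).
    """
    # Horizontal depth difference between adjacent columns.
    dz = np.abs(np.diff(depth, axis=1))
    # Boundary mask: True where the discontinuity exceeds the threshold.
    mask = np.zeros_like(depth, dtype=bool)
    mask[:, 1:] = dz > jump_thresh
    # Return (row, col) pixel coordinates of candidate boundary points.
    return np.argwhere(mask)

# Example with synthetic data: a wall at 2 m and an open elevator at 5 m.
depth = np.full((480, 640), 2.0)
depth[:, 260:380] = 5.0  # simulated elevator opening
pts = boundary_candidates(depth)
print(pts.shape)  # two vertical edges' worth of candidate pixels
```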

Development of Drone Racing Simulator using SLAM Technology and Reconstruction of Simulated Environments (SLAM 기술을 활용한 가상 환경 복원 및 드론 레이싱 시뮬레이션 제작)

  • Park, Yonghee;Yu, Seunghyun;Lee, Jaegwang;Jeong, Jonghyeon;Jo, Junhyeong;Kim, Soyeon;Oh, Hyejun;Moon, Hyungpil
    • The Journal of Korea Robotics Society, v.16 no.3, pp.245-249, 2021
  • In this paper, we present novel simulation contents for drone racing and autonomous drone flight. Using a depth camera and SLAM, we mapped a three-dimensional environment with RTAB-Map; the resulting 3D map is represented as point cloud data. We then reconstructed this data in Unreal Engine. The reconstructed raw data reflects real-world data, including noise and outliers. We also built drone-racing content, such as gates and obstacles, in Unreal Engine for evaluating drone flight. We then implemented both HITL (hardware-in-the-loop) and SITL (software-in-the-loop) simulation using AirSim, which offers a flight controller and a ROS API. Finally, we demonstrate autonomous drone flight with ROS and AirSim. Because the reconstructed environment preserves real-world geometry and sensor properties, the drone experiences realistic flight even in the simulation world. Our simulation framework is therefore more practical than common simulators that ignore the real environment and real sensor characteristics.
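
For flavor, here is a minimal sketch of the AirSim side of such a pipeline, using AirSim's public Python client. The API calls are standard AirSim ones, but the waypoints and speed are invented for illustration, and a simulator instance with a multirotor must already be running:

```python
import airsim

# Connect to a running AirSim instance (e.g., an Unreal Engine map).
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, then fly through a few illustrative gate positions.
client.takeoffAsync().join()
for x, y, z in [(10, 0, -3), (20, 5, -3), (30, 0, -3)]:  # NED frame, z negative is up
    client.moveToPositionAsync(x, y, z, velocity=5).join()

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```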

LiDAR-based Mapping Considering Laser Reflectivity in Indoor Environments (실내 환경에서의 레이저 반사도를 고려한 라이다 기반 지도 작성)

  • Roun Lee;Jeonghong Park;Seonghun Hong
    • The Journal of Korea Robotics Society, v.18 no.2, pp.135-142, 2023
  • Light detection and ranging (LiDAR) sensors are the most widely used sensors in terrestrial robotic applications because they provide dense and precise measurements of the surrounding environment. However, the reliability of LiDAR measurements can vary considerably with the reflectivity of the surface material that the laser beams strike. This study presents a LiDAR-based mapping method that is robust to varying laser reflectivity in indoor environments, built on the framework of simultaneous localization and mapping (SLAM). The proposed method minimizes degradation of SLAM accuracy by checking for and discarding potentially unreliable LiDAR measurements in the SLAM front-end. The gaps this creates in the point-cloud map are then filled by Gaussian process regression. Experimental results with a mobile robot platform in an indoor environment validate the effectiveness of the proposed methodology.
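
The two core ideas, discarding low-reflectivity returns and filling the resulting gaps with Gaussian process regression, can be sketched as follows. This is a minimal illustration with synthetic data; the 0.15 intensity cutoff, the RBF kernel, and the query grid are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# points: (N, 3) LiDAR points; intensity: (N,) reflectivity per return.
rng = np.random.default_rng(0)
points = rng.uniform(0, 10, (500, 3))
intensity = rng.uniform(0, 1, 500)

# 1) Discard potentially unreliable returns from low-reflectivity
#    surfaces (the 0.15 cutoff is illustrative, not the paper's criterion).
reliable = intensity > 0.15
kept = points[reliable]

# 2) Fill the resulting map gaps by regressing height z on (x, y)
#    with a Gaussian process, as the paper does for point-cloud holes.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(kept[:, :2], kept[:, 2])

# Predict z on a query grid covering a gap region.
xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
grid = np.c_[xs.ravel(), ys.ravel()]
z_filled = gp.predict(grid)
print(z_filled.shape)  # (400,)
```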

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society, v.14 no.2, pp.87-93, 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the sensor pose when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information; we generate a new input representation that combines RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has access to successive sensor measurements. To improve localization performance, the CNN output is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
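
The fusion of the CNN pose output with a particle filter can be sketched as below. This is a generic illustration reduced to a planar (x, y, yaw) pose for brevity, not the authors' 6-DOF implementation; the noise levels and motion model are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                    # number of particles
particles = rng.normal(0.0, 1.0, (N, 3))   # (x, y, yaw) hypotheses
weights = np.full(N, 1.0 / N)

def predict(particles, odom, sigma=(0.05, 0.05, 0.01)):
    """Propagate particles with odometry plus Gaussian noise."""
    return particles + odom + rng.normal(0, sigma, particles.shape)

def update(particles, weights, cnn_pose, sigma=0.3):
    """Weight particles by their agreement with the CNN-regressed pose,
    treating the network output as a (noisy) absolute measurement."""
    d2 = np.sum((particles - cnn_pose) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / sigma**2)
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter step: odometry prediction, CNN measurement, estimate.
particles = predict(particles, odom=np.array([0.1, 0.0, 0.0]))
weights = update(particles, weights, cnn_pose=np.array([0.1, 0.0, 0.0]))
particles, weights = resample(particles, weights)
print(particles.mean(axis=0))   # smoothed pose estimate
```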

3D Multi-floor Precision Mapping and Localization for Indoor Autonomous Robots (실내 자율주행 로봇을 위한 3차원 다층 정밀 지도 구축 및 위치 추정 알고리즘)

  • Kang, Gyuree;Lee, Daegyu;Shim, Hyunchul
    • The Journal of Korea Robotics Society, v.17 no.1, pp.25-31, 2022
  • Moving between multiple floors is one of the most challenging tasks for indoor autonomous robots, and most previous research on indoor mapping and localization has focused on single-floor environments. In this paper, we present an algorithm that creates a multi-floor map from 3D point clouds and performs localization within that map using a LiDAR and an IMU. Our algorithm builds the multi-floor map by constructing each single-floor map with a LOAM-based algorithm and stacking the floors through global registration that aligns the sections common to each floor's map. Localization in the multi-floor map is performed by adding height information to NDT (Normal Distribution Transform)-based registration. The multi-floor map showed mean errors of 0.29 m and 0.43 m along the x- and y-axes, respectively; the mean yaw error was 1.00°, and the height error rate was 0.063. A real-world localization test performed on the third floor showed a mean squared error of 0.116 m and an average update time of 0.01 s. This study should help indoor autonomous robots operate across multiple floors.
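
One building block of multi-floor mapping, separating an aggregated cloud into per-floor slices by height, can be sketched as follows. This is a generic illustration, not the paper's LOAM/NDT pipeline; the histogram resolution and slab criterion are assumptions:

```python
import numpy as np

def split_floors(points, bin_size=0.1, min_frac=0.02):
    """Split a multi-floor point cloud into per-floor slices by finding
    dense horizontal bands (floor/ceiling slabs) in the z-histogram.

    points: (N, 3) array; bin_size: histogram resolution in meters;
    min_frac: fraction of all points a bin needs to count as a slab.
    (Both parameters are illustrative assumptions.)
    """
    z = points[:, 2]
    bins = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, edges = np.histogram(z, bins=bins)
    slab_z = edges[:-1][hist > min_frac * len(z)]  # dense bands = slabs
    # Points between consecutive slabs belong to one floor.
    return np.searchsorted(slab_z, z)

# Synthetic two-floor cloud: dense slabs at z = 0 and z = 3.
rng = np.random.default_rng(2)
walls = rng.uniform([0, 0, 0], [10, 10, 6], (2000, 3))
slabs = np.concatenate([rng.uniform([0, 0, -0.02], [10, 10, 0.02], (3000, 3)),
                        rng.uniform([0, 0, 2.98], [10, 10, 3.02], (3000, 3))])
pts = np.concatenate([walls, slabs])
print(np.unique(split_floors(pts)))  # distinct floor labels
```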

Object Pose Estimation and Motion Planning for Service Automation System (서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획)

  • Youngwoo Kwon;Dongyoung Lee;Hosun Kang;Jiwook Choi;Inho Lee
    • The Journal of Korea Robotics Society, v.19 no.2, pp.176-187, 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being applied and researched in various fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot; grasping a variety of objects requires a gripper with a high degree of freedom. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming that various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and create grasping points. The objects are grasped at these points by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we use a CNN (Convolutional Neural Network)-based algorithm together with the point cloud to estimate each object's 6D pose. Using the recognized 6D pose, we create grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and recording the success or failure of barcode recognition to demonstrate the effectiveness of the proposed system.
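
As a rough illustration of pose estimation from a segmented point cloud, the sketch below estimates a coarse 6D pose as the centroid plus PCA axes. This is a common baseline, not necessarily the estimator used in the paper; the box dimensions and offsets are invented:

```python
import numpy as np

def pose_from_points(obj_points):
    """Estimate a coarse 6D pose of a segmented object as the centroid
    (translation) plus PCA axes (rotation).

    obj_points: (N, 3) points belonging to the object, e.g., the
    back-projection of a CNN segmentation mask into the point cloud.
    """
    t = obj_points.mean(axis=0)                  # translation
    centered = obj_points - t
    # Principal axes from the SVD of the centered points.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    R = vt.T
    if np.linalg.det(R) < 0:                     # enforce right-handedness
        R[:, -1] *= -1
    return R, t                                  # pose as rotation + translation

# A grasp point along the object's main axis could then be derived from
# R and t, and re-grasping planned so the barcode face stays scannable.
rng = np.random.default_rng(3)
box = rng.uniform([-0.1, -0.03, -0.02], [0.1, 0.03, 0.02], (1000, 3))
R, t = pose_from_points(box + np.array([0.5, 0.2, 0.1]))
print(np.round(t, 2))  # ~ [0.5, 0.2, 0.1]
```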

Improved LiDAR-Camera Calibration Using Marker Detection Based on 3D Plane Extraction

  • Yoo, Joong-Sun;Kim, Do-Hyeong;Kim, Gon-Woo
    • Journal of Electrical Engineering and Technology, v.13 no.6, pp.2530-2544, 2018
  • In this paper, we propose an enhanced LiDAR-camera calibration method that extracts the marker plane from 3D point cloud information. In previous work, we estimated a straight line along each edge of the board to obtain its vertices; however, the errors in the point information along the z-axis were not considered. These errors are caused by the effect of user selection on the board border: because of the nature of LiDAR, the point information is separated in the horizontal direction, making the approximated straight-line model erroneous. In the proposed work, we obtain each vertex by fitting a rectangle to a plane rather than intersecting straight-line fits, yielding vertices more precisely than in the previous study. The advantage of using planes is that the area is easier to select, and most of the point information on the board can be used. We demonstrate through experiments that the proposed method obtains more accurate results than the previous method.
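
Plane extraction of this kind is commonly done with RANSAC. The sketch below uses Open3D's built-in RANSAC plane segmentation on synthetic data; the distance threshold and iteration count are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
import open3d as o3d

# Synthetic marker-board scan: a planar patch plus background clutter.
rng = np.random.default_rng(4)
board = rng.uniform([-0.4, -0.3, 0], [0.4, 0.3, 0], (2000, 3))
clutter = rng.uniform([-2, -2, -2], [2, 2, 2], (500, 3))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack([board, clutter]))

# RANSAC plane fit: returns (a, b, c, d) of ax + by + cz + d = 0 and
# the inlier indices.
plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3,
                                   num_iterations=1000)
board_points = np.asarray(pcd.points)[inliers]
print(plane)  # should be close to the z = 0 plane, i.e., (0, 0, 1, 0)

# The board's rectangle (and hence its vertices) can then be fitted to
# board_points, instead of intersecting line fits along noisy borders.
```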

Performance Improvement for Tracking Small Targets (고기동 표적 추적 성능 개선을 위한 연구)

  • Jung, Yun-Sik;Kim, Kyung-Su;Song, Taek-Lyul
    • Journal of Institute of Control, Robotics and Systems, v.16 no.11, pp.1044-1052, 2010
  • In this paper, a new real-time algorithm called RTPBTD-HPDAF (Recursive Temporal Profile-Based Target Detection with Highest Probability Data Association Filter) is presented for tracking fast-moving small targets with IIR (imaging infrared) sensor systems. Spatial filter algorithms are commonly used for target detection and tracking in IIR sensor systems; however, they often generate high-density clutter due to the various shapes of clouds. The TPBTD (Temporal Profile-Based Target Detection) algorithm, based on analyzing the temporal behavior of individual pixels, is known to perform well in detecting and tracking fast-moving targets while suppressing clutter, but it is not suitable for detecting stationary or abruptly maneuvering targets, and its computational load is not negligible. The RTPBTD-HPDAF algorithm proposed in this paper for real-time target detection and tracking is shown to be computationally cheap while retaining the ability to track targets through abrupt maneuvers. Its performance is tested and compared with a spatial filter combined with the HPDAF algorithm, in terms of run time and track initiation, on real IIR video.
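
The temporal-profile idea, detecting a moving target as a transient deviation in each pixel's own temporal statistics, can be sketched with a simple recursive per-pixel background model. This is a generic illustration, not the RTPBTD-HPDAF algorithm; the update rate and threshold are invented:

```python
import numpy as np

class RecursiveTemporalDetector:
    """Per-pixel recursive background model: a fast-moving target shows
    up as a transient deviation in a pixel's temporal profile."""

    def __init__(self, shape, alpha=0.05, k=4.0):
        self.mean = np.zeros(shape)      # recursive temporal mean
        self.var = np.ones(shape)        # recursive temporal variance
        self.alpha = alpha               # update rate (assumed value)
        self.k = k                       # detection threshold in sigmas

    def step(self, frame):
        # Detect pixels deviating from their own temporal statistics.
        resid = frame - self.mean
        detections = np.abs(resid) > self.k * np.sqrt(self.var)
        # Recursive (exponential) update of per-pixel mean and variance.
        self.mean += self.alpha * resid
        self.var = (1 - self.alpha) * (self.var + self.alpha * resid**2)
        return detections

det = RecursiveTemporalDetector((64, 64))
rng = np.random.default_rng(5)
for t in range(50):
    frame = rng.normal(0, 1, (64, 64))
    frame[32, min(t, 63)] += 20.0        # fast small target crossing
    hits = det.step(frame)
print(hits.sum())  # isolated detections near the moving target
```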

LiDAR Image Segmentation using Convolutional Neural Network Model with Refinement Modules (정제 모듈을 포함한 컨볼루셔널 뉴럴 네트워크 모델을 이용한 라이다 영상의 분할)

  • Park, Byungjae;Seo, Beom-Su;Lee, Sejin
    • The Journal of Korea Robotics Society, v.13 no.1, pp.8-15, 2018
  • This paper proposes a convolutional neural network model for distinguishing areas occupied by obstacles in a LiDAR image converted from a 3D point cloud. The channels of the input LiDAR image consist of the distances to the 3D points, their reflectivities, and their heights from the ground. The proposed model takes a LiDAR image as input and outputs a segmented LiDAR image. It adopts refinement modules with skip connections, which make it possible to construct a complex structure with fewer parameters than a convolutional neural network with a linear structure. Using the proposed model, areas of a LiDAR image occupied by obstacles such as vehicles, pedestrians, and bicyclists can be distinguished, so the model can be applied to recognizing surrounding obstacles and searching for safe paths.
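
The input representation, a 2D LiDAR image with distance, reflectivity, and height channels projected from the 3D point cloud, can be sketched as follows. The image resolution and vertical field of view are assumed values, not the paper's configuration:

```python
import numpy as np

def pointcloud_to_lidar_image(points, refl, H=64, W=512,
                              fov_up=np.radians(15), fov_down=np.radians(-15)):
    """Project a 3D point cloud into an (H, W, 3) LiDAR image with
    range, reflectivity, and height channels."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                        # azimuth angle
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))
    # Map angles to pixel coordinates.
    u = ((1 - (yaw + np.pi) / (2 * np.pi)) * W).astype(int) % W
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int)
    valid = (v >= 0) & (v < H)
    img = np.zeros((H, W, 3))
    img[v[valid], u[valid], 0] = r[valid]         # distance channel
    img[v[valid], u[valid], 1] = refl[valid]      # reflectivity channel
    img[v[valid], u[valid], 2] = z[valid]         # height channel
    return img

rng = np.random.default_rng(6)
pts = rng.uniform([-20, -20, -2], [20, 20, 2], (10000, 3))
img = pointcloud_to_lidar_image(pts, rng.uniform(0, 1, 10000))
print(img.shape)  # (64, 512, 3)
```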

Multi-facet 3D Scanner Based on Stripe Laser Light Image (선형 레이저 광 영상기반 다면 3 차원 스캐너)

  • Ko, Young-Jun;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems, v.22 no.10, pp.811-816, 2016
  • In light of the recent development of 3D printers for rapid prototyping, increasing attention is being paid to 3D scanners as systems for acquiring 3D data from an existing object. This paper presents a prototype 3D scanner based on stripe laser light images. To address the problem of shadowed areas, the proposed scanner has two cameras with one laser light source, and by using a horizontal rotation table and an arm rotating about the latitudinal axis, it can scan in all directions. To eliminate the need for an additional optical filter when extracting laser-light pixels from an image, we adopted a differential-image method with laser-light modulation. Experimental results show measurement errors below 0.2 mm in the scanner's 3D data acquisition. The scanner thus demonstrates that an object's 3D surface can be reconstructed from point cloud data and the object reproduced using a commercially available 3D printer.
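
The differential-image method can be sketched as below: with the laser modulated on and off between frames, subtracting the two frames cancels ambient light, so the stripe can be extracted without an optical filter. The intensity threshold and synthetic frames are invented for illustration:

```python
import numpy as np

def stripe_pixels(frame_on, frame_off, thresh=30):
    """Extract the laser-stripe pixel in each image row from a
    laser-on/laser-off frame pair.

    Returns (row, col) coordinates of the stripe, one column per row."""
    diff = frame_on.astype(int) - frame_off.astype(int)
    cols = np.argmax(diff, axis=1)                # brightest pixel per row
    rows = np.arange(diff.shape[0])
    ok = diff[rows, cols] > thresh                # keep confident rows only
    return rows[ok], cols[ok]

# Synthetic frames: ambient scene plus a vertical stripe at column 300.
rng = np.random.default_rng(7)
ambient = rng.integers(0, 100, (480, 640))
frame_off = ambient
frame_on = ambient.copy()
frame_on[:, 300] += 120                           # laser stripe
rows, cols = stripe_pixels(frame_on, frame_off)
print(np.unique(cols))  # -> [300]
```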