• Title/Summary/Keyword: RGB-D Sensor

Search results: 47

RGB-D 센서를 이용한 이동로봇의 안전한 엘리베이터 승하차 (Getting On and Off an Elevator Safely for a Mobile Robot Using RGB-D Sensors)

  • 김지환;정민국;송재복
    • 로봇학회논문지
    • /
    • Vol. 15, No. 1
    • /
    • pp.55-61
    • /
    • 2020
  • Getting on and off an elevator is one of the most important parts of multi-floor navigation for a mobile robot. In this study, we propose a method for recognizing the pose of the elevator doors, planning a safe path, and estimating the motion of the robot using RGB-D sensors so that the robot can get on and off the elevator safely. The accurate pose of the elevator doors is recognized using a particle filter algorithm. After the elevator door opens, the robot builds an occupancy grid map that includes the interior of the elevator and generates a safe path on it, which prevents collisions with obstacles in the elevator. While getting on and off, the robot applies an optical flow algorithm to images of the floor to detect the state in which it cannot move because of the elevator door sill. Experimental results in various settings show that the proposed method enables the robot to get on and off the elevator safely.
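The sill-stuck check described above can be sketched with dense optical flow on the downward-facing floor image: if the wheels are commanded to move but the floor barely shifts in the image, the robot is likely caught on the door sill. A minimal illustration using OpenCV's Farneback flow; the thresholds, function name, and the choice of Farneback flow are assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def is_robot_stuck(prev_gray, curr_gray, cmd_speed,
                   flow_thresh=0.2, speed_thresh=0.05):
    """Flag a 'stuck at the door sill' state: the wheels are commanded to move
    (cmd_speed above speed_thresh) but the downward-facing floor image barely
    changes (median optical-flow magnitude below flow_thresh).
    Thresholds are illustrative, not the paper's values."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    return cmd_speed > speed_thresh and np.median(mag) < flow_thresh
```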

A Survey of Human Action Recognition Approaches that use an RGB-D Sensor

  • Farooq, Adnan;Won, Chee Sun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 4, No. 4
    • /
    • pp.281-290
    • /
    • 2015
  • Human action recognition from a video scene has remained a challenging problem in computer vision and pattern recognition. The development of low-cost RGB-depth (RGB-D) cameras has opened new opportunities for solving it. In this paper, we present a comprehensive review of recent approaches to human action recognition based on depth maps, skeleton joints, and other hybrid approaches. In particular, we focus on the advantages and limitations of the existing approaches and on future directions.

지지내접원을 이용한 이동 로봇의 전복 지형 검출 기법 (Tip-over Terrain Detection Method based on the Support Inscribed Circle of a Mobile Robot)

  • 이성민;박정길;박재병
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 10
    • /
    • pp.1057-1062
    • /
    • 2014
  • This paper proposes a tip-over detection method for a mobile robot using a support inscribed circle, defined as the inscribed circle of the support polygon. A support polygon, defined by the contact points between the robot and the terrain, is often used to analyze tip-over. For a robot moving on uneven terrain, if the intersection between the line of gravity extended from the robot's COG and the terrain lies inside the support polygon, tip-over will not occur; if the intersection lies outside, tip-over will occur. The terrain is detected with an RGB-D sensor and locally modeled as a plane, so the normal vector can be obtained at each point on the terrain. The support polygon and the terrain's normal vector are used to detect tip-over. However, tip-over cannot be predicted in advance because the support polygon depends on the orientation of the robot; the support polygon is therefore approximated by its inscribed circle so that tip-over can be detected regardless of the robot's orientation. To verify the effectiveness of the proposed method, experiments were carried out using a 4-wheeled robot, the ERP-42, with an Xtion RGB-D sensor.
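The inscribed-circle test can be illustrated as follows: intersect the gravity line dropped from the COG with the locally fitted terrain plane and compare the intersection point against the inscribed circle of the support polygon. A sketch under the assumption that gravity is the world -z axis and that the circle's center and radius are already known; the paper's exact formulation may differ.

```python
import numpy as np

def tips_over(cog, plane_point, plane_normal, circle_center, radius):
    """Approximate tip-over test: drop a gravity ray from the robot's COG,
    intersect it with the locally fitted terrain plane, and compare the
    intersection against the support polygon's inscribed circle.
    A sketch of the idea, not the paper's exact formulation."""
    g = np.array([0.0, 0.0, -1.0])                    # gravity direction (world frame)
    n = plane_normal / np.linalg.norm(plane_normal)
    t = np.dot(plane_point - cog, n) / np.dot(g, n)   # ray parameter to the plane
    hit = cog + t * g                                 # gravity line / terrain intersection
    # Stable only if the intersection stays inside the inscribed circle.
    return np.linalg.norm(hit - circle_center) > radius
```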

어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner)

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is large and slow at computing depth for omni-directional images. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculate fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
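One way to associate the two modalities is to project the laser obstacle points into the downward-facing fisheye image and match them against the obstacle outlines there. The sketch below assumes an equidistant fisheye model (r = f·θ) and known intrinsics (f, cx, cy); the paper does not specify its camera model, so this is illustrative only.

```python
import numpy as np

def project_fisheye_equidistant(point_cam, f, cx, cy):
    """Project a 3D point (camera frame, z along the optical axis) into a
    fisheye image with the equidistant model r = f * theta. A simplified
    stand-in for the paper's unspecified camera model."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```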

Implicit Surface Representation of Three-Dimensional Face from Kinect Sensor

  • 수료 아드히 워보워;김은경;김성신
    • 한국지능시스템학회논문지
    • /
    • Vol. 25, No. 4
    • /
    • pp.412-417
    • /
    • 2015
  • The Kinect sensor produces two outputs, a color image from its red-green-blue (RGB) sensor and a depth map from its depth sensor. Although the device is cheaper than other devices for three-dimensional (3D) reconstruction, extra work is needed to reconstruct smooth 3D data that also carry semantic meaning, because the depth map produced by the depth sensor is usually coarse and contains empty values. Consequently, reconstructing it to 3D directly creates artifacts and holes on the surface. In this paper, we present a method for solving this problem using an implicit surface representation. The key idea is to represent the implicit surface with radial basis functions (RBF); to avoid the trivial solution in which the implicit function is zero everywhere, we define on-surface and off-surface points. Simulation results using captured faces as input show that we can produce a smooth 3D face and fill the holes on the 3D face surface, since RBF interpolation is well suited to hole filling. Modified anisotropic diffusion is used to produce the smoothed surface.
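The on/off-surface construction can be sketched as follows: surface points are assigned the value 0, points offset along the normals by ±ε get the values ±ε, and an RBF system is solved for the interpolating weights. A compact sketch with a triharmonic |r|³ kernel and no polynomial term; the kernel choice and regularization are assumptions, not the paper's settings.

```python
import numpy as np

def fit_rbf_implicit(on_pts, normals, eps=0.01):
    """Fit an implicit surface f(x)=0 with radial basis functions.
    On-surface points get value 0; off-surface points, offset along the
    normals by +/- eps, get values +eps / -eps so the solution is not the
    trivial all-zero function. A sketch of the standard RBF scheme."""
    centers = np.vstack([on_pts,
                         on_pts + eps * normals,
                         on_pts - eps * normals])
    values = np.concatenate([np.zeros(len(on_pts)),
                             np.full(len(on_pts),  eps),
                             np.full(len(on_pts), -eps)])
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    phi = d ** 3                                    # triharmonic kernel |r|^3
    w = np.linalg.solve(phi + 1e-9 * np.eye(len(centers)), values)

    def f(x):
        """Evaluate the fitted implicit function at a query point x."""
        r = np.linalg.norm(centers - x, axis=1)
        return np.dot(w, r ** 3)
    return f
```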

RGB-D 이미지에서 인체 영역 검출을 위한 프레임워크 (A Framework for Human Body Parts Detection in RGB-D Image)

  • 홍성진;김명규
    • 한국멀티미디어학회논문지
    • /
    • Vol. 19, No. 12
    • /
    • pp.1927-1935
    • /
    • 2016
  • This paper proposes a framework for detecting human body parts in RGB-D images. We obtain the person area, find candidate areas, and run local detection in order to detect the hands, feet, and head, which are characterized by a long accumulative geodesic distance. The person area is obtained with background subtraction and noise removal on the depth image, which is robust to illumination changes. Finding candidate areas involves constructing a graph model on which the accumulative geodesic distance to the candidates can be measured. Instead of the raw depth map, our approach builds the graph model from regions segmented by a quadtree structure to reduce the search time for the candidates. Local detection uses a HOG-based SVM for each part, and the head is detected first. To minimize false detections of the hands and feet, the candidates are classified as upper or lower body using the head position and the properties of the geodesic distance, and are then detected with the local detectors. We evaluate our algorithm on datasets collected with a Kinect v2 sensor, and our approach shows good performance for head, hand, and foot detection.
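The accumulative geodesic distance can be illustrated with a Dijkstra expansion over the segmented person region, where neighboring pixels are connected only if their depth difference is small; distance maxima then point to the head, hands, and feet. The grid-based sketch below ignores the paper's quadtree segmentation and uses illustrative thresholds.

```python
import heapq
import numpy as np

def geodesic_distance_map(depth, mask, seed, max_step=0.03):
    """Accumulative geodesic distance over the segmented person region:
    4-connected pixels are linked only if their depth difference is small,
    and Dijkstra grows distances from a seed (e.g. the body centroid).
    Grid-based sketch; the paper uses a quadtree-segmented graph instead."""
    h, w = depth.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w) or not mask[nr, nc]:
                continue
            if abs(depth[nr, nc] - depth[r, c]) > max_step:
                continue                        # depth jump: not the same limb
            nd = d + np.hypot(1.0, depth[nr, nc] - depth[r, c])
            if nd < dist[nr, nc]:
                dist[nr, nc] = nd
                heapq.heappush(heap, (nd, (nr, nc)))
    return dist                                 # maxima hint at head / hands / feet
```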

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • 국제학술발표논문집
    • /
    • The 9th International Conference on Construction Engineering and Project Management
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots that assist workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusion, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D locations of the joints, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations so as to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of the estimation and its feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
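The fusion step can be sketched as a weighted linear triangulation (DLT) of one joint from its per-camera 2D detections, with the weights standing in for detection confidence. This is a generic least-squares stand-in, not the paper's probabilistic maximum-likelihood formulation.

```python
import numpy as np

def triangulate_joint(proj_mats, pts2d, weights):
    """Fuse one joint's 2D detections from several calibrated cameras into a
    3D point with a weighted linear (DLT) triangulation. proj_mats are 3x4
    projection matrices, pts2d are (u, v) detections, weights are per-view
    confidences. Illustrative sketch only."""
    rows = []
    for P, (u, v), w in zip(proj_mats, pts2d, weights):
        rows.append(w * (u * P[2] - P[0]))      # two linear constraints per view
        rows.append(w * (v * P[2] - P[1]))
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)                 # least-squares homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]                         # de-homogenise to a 3D point
```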


황화수소 가스 감지를 위한 고성능 변색성 섬유형 센서의 제작 및 개발 (Fabrication of High-Performance Colorimetric Fiber-Type Sensors for Hydrogen Sulfide Detection)

  • 정동혁;맹보희;이준엽;조성빈;안희경;정대웅
    • 센서학회지
    • /
    • Vol. 31, No. 3
    • /
    • pp.168-174
    • /
    • 2022
  • Hydrogen sulfide (H2S) gas is a high-risk gas that can cause suffocation or, in severe cases, death, depending on the exposure concentration. Various studies to detect this gas are still in progress. In this study, we demonstrate a colorimetric sensor that detects H2S gas through its direct color change. The proposed nanofiber sensor, containing lead(II) acetate as a dye that changes color on reaction with H2S gas, is fabricated by electrospinning. The performance of the sensor is evaluated by measuring RGB changes, the ΔE value, and gas selectivity. It shows a sensitivity of 5.75 × 10⁻³ ΔE/(s·ppm), up to 1.4 times that of an existing H2S colorimetric sensor, which is attributed to the large surface area of the nanofibers. The selectivity for H2S gas is confirmed to be an excellent value of almost 70%.
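The ΔE evaluation can be illustrated by converting the sensor's RGB readings before and after exposure to CIELAB and taking the Euclidean color difference, normalized by exposure time and gas concentration. The color-space conversion and ΔE*ab formula below are assumptions; the paper does not state which it uses.

```python
import numpy as np
from skimage import color

def delta_e_rate(rgb_before, rgb_after, exposure_s, conc_ppm):
    """Color change of the fiber sensor expressed as CIE dE*ab per second per
    ppm: convert the averaged 8-bit RGB readings to CIELAB and take the
    Euclidean difference. Illustrative only; not the paper's exact procedure."""
    lab0 = color.rgb2lab(np.array(rgb_before, float).reshape(1, 1, 3) / 255.0)
    lab1 = color.rgb2lab(np.array(rgb_after,  float).reshape(1, 1, 3) / 255.0)
    delta_e = np.linalg.norm(lab1 - lab0)       # dE*ab between the two readings
    return delta_e / (exposure_s * conc_ppm)
```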

스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘 (Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm)

  • 김미경;차의영
    • 방송공학회논문지
    • /
    • Vol. 23, No. 5
    • /
    • pp.598-605
    • /
    • 2018
  • Action recognition is a technique for recognizing human actions from data and can be used in applications such as detecting dangerous behavior in video surveillance systems. Existing action recognition algorithms have relied on images from 2D cameras, on equipment such as multimodal sensors and multi-view setups, or on 3D devices. Methods using 2D data show low recognition rates for actions in 3D space because of effects such as occlusion, while the other methods suffer from complex equipment configurations or expensive additional hardware. This paper proposes a method that recognizes human actions from CCTV images alone, using only RGB and depth information without additional equipment. First, a skeleton extraction algorithm is applied to the RGB image to extract points for the joints and body parts. These are transformed, by applying equations, into vectors that include displacement vectors and relation vectors, and the resulting vector sequences are learned with an RNN model. Applying the trained model to various datasets to measure action recognition accuracy, we show that, with 2D information alone, it achieves performance similar to existing algorithms that use 3D information.
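The sequence-learning stage can be sketched as an LSTM classifier over per-frame skeleton feature vectors (displacement and relation vectors concatenated). The layer sizes, class count, and feature layout below are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SkeletonRNN(nn.Module):
    """Sequence classifier over per-frame skeleton feature vectors, e.g. joint
    displacement and joint-relation vectors concatenated per frame.
    A generic LSTM sketch; hyperparameters are illustrative."""
    def __init__(self, feat_dim, hidden=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        _, (h, _) = self.lstm(x)          # last hidden state summarises the clip
        return self.head(h[-1])           # class logits per action
```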

라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘 (Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity)

  • 정상우;정민우;김아영
    • 로봇학회논문지
    • /
    • Vol. 16, No. 3
    • /
    • pp.189-198
    • /
    • 2021
  • In this paper, we introduce a software/hardware system that can reliably calculate the distance from the sensor to a target model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, the accuracy of the point cloud map is of great importance. However, a 3D point cloud map obtained from Lidar may show different point cloud densities depending on the choice of sensor, the measurement distance, and the object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate specific points in the point cloud map at which distances are measured manually. This manual process is time- and labor-consuming and is highly affected by the Lidar sparsity level. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. By calculating the distance from the sensor to the device, we verified with an RGB-D camera and a Lidar that the automated method is much faster than the manual procedure and robust to sparsity. We also show that the system is not limited to indoor environments by conducting an experiment with the Lidar sensor outdoors.
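The automated measurement can be sketched as follows: fit a plane to the high-intensity returns on each of the three planar faces of the target, intersect the three planes to recover the device corner, and take its range from the sensor origin. The target geometry and fitting details below are assumptions, not the paper's exact design.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns a point on the plane
    (the centroid) and the unit normal (smallest-variance direction)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def device_corner_distance(high_intensity_pts, labels):
    """Estimate the sensor-to-target distance from a sparse scan: fit a plane
    to the high-intensity returns of each of the three planar faces (labels
    0/1/2), intersect the planes to recover the device corner, and measure its
    range from the sensor at the origin. Illustrative sketch only."""
    normals, offsets = [], []
    for k in range(3):
        p0, n = fit_plane(high_intensity_pts[labels == k])
        normals.append(n)
        offsets.append(np.dot(n, p0))          # plane k: n . x = n . p0
    corner = np.linalg.solve(np.vstack(normals), np.array(offsets))
    return np.linalg.norm(corner)              # range from the sensor origin
```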