• Title/Summary/Keyword: RGB-D Sensor


Design of ToF-Stereo Fusion Sensor System for 3D Spatial Scanning (3차원 공간 스캔을 위한 ToF-Stereo 융합 센서 시스템 설계)

  • Yun Ju Lee; Sun Kook Yoo
    • Smart Media Journal / v.12 no.9 / pp.134-141 / 2023
  • In this paper, we propose a ToF-Stereo fusion sensor system for 3D spatial scanning that increases the recognition rate of 3D objects, guarantees object detection quality, and is robust to the environment. The system fuses the sensing values of the ToF sensor and the stereo RGB sensor, so that even if one sensor fails, the other can still be used to detect objects continuously. Since the quality of the ToF sensor and the stereo RGB sensor varies with sensing distance, sensing resolution, light reflectivity, and illuminance, a module is included that adjusts each sensor's operation based on reliability estimation. The system combines the two sensing values, estimates their reliability, and adjusts the sensors accordingly before fusing the measurements, thereby improving the quality of the 3D spatial scan.
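
A rough sketch of the reliability-weighted fusion idea described above is shown below; the per-pixel weighting rule, the NaN handling for a failed sensor, and the function name fuse_depth are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_depth(tof_depth, stereo_depth, tof_rel, stereo_rel):
    """Fuse two depth maps by per-pixel reliability weighting.

    tof_depth, stereo_depth : HxW depth maps in meters (np.nan where invalid)
    tof_rel, stereo_rel     : HxW reliability maps in [0, 1]
    """
    # Treat invalid (NaN) pixels as zero reliability so the other sensor takes over.
    tof_w = np.where(np.isnan(tof_depth), 0.0, tof_rel)
    stereo_w = np.where(np.isnan(stereo_depth), 0.0, stereo_rel)

    weight_sum = tof_w + stereo_w
    fused = (np.nan_to_num(tof_depth) * tof_w +
             np.nan_to_num(stereo_depth) * stereo_w) / np.maximum(weight_sum, 1e-6)

    # Pixels where both sensors are unreliable remain invalid.
    fused[weight_sum < 1e-6] = np.nan
    return fused
```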

A Robot Localization based on RGB-D Sensor (RGB-D 센서 기반의 로봇 위치추정 기법 연구)

  • Seo, Yu-Hyeon; Lee, Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.872-875 / 2014
  • The main purpose of robots used in disaster prevention and relief is to explore areas that humans cannot approach directly, so that humans can make better-informed decisions. In a disaster area, however, robot operation is severely constrained by communication failures, conditions where visual identification is impossible, and the limits of remote control. To address this, a previous study, "A Study on a Method of Mutual Position Recognition Using LED-RGB Color Sensors" [1], was carried out, but the RGB recognition distance was quite short and the resulting decisions were ambiguous. In this study, we therefore use an RGB-D sensor to extend the RGB recognition range. In addition, to obtain higher accuracy, we propose a localization method that uses the depth data to treat feature points of surrounding objects as landmarks and estimates the robot's position from its relative position to those landmarks. Finally, we compare the mutual recognition algorithm with the previous approach.
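
As a loose illustration of the landmark idea (back-projecting a depth pixel to a 3D offset and recovering the sensor position from a landmark with a known map position), one might write something like the following. The pinhole intrinsics, the single-landmark case, and the known-orientation simplification are assumptions for illustration only.

```python
import numpy as np

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) into a 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def estimate_camera_position(landmark_world, u, v, depth, fx, fy, cx, cy, R_wc=np.eye(3)):
    """Estimate the camera position from one landmark with a known world position.

    R_wc: assumed known camera orientation (rotation camera -> world).
    The camera position is the landmark position minus the rotated relative offset.
    """
    offset_cam = pixel_to_point(u, v, depth, fx, fy, cx, cy)
    return landmark_world - R_wc @ offset_cam

# Example: a landmark at (2, 0, 1) m observed 1.5 m straight ahead of the camera center.
pos = estimate_camera_position(np.array([2.0, 0.0, 1.0]),
                               u=320, v=240, depth=1.5,
                               fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(pos)  # -> [2.0, 0.0, -0.5]: the camera sits 1.5 m behind the landmark along its z-axis
```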

Planning of Safe and Efficient Local Path based on Path Prediction Using a RGB-D Sensor (RGB-D센서 기반의 경로 예측을 적용한 안전하고 효율적인 지역경로 계획)

  • Moon, Ji-Young; Chae, Hee-Won; Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.13 no.2 / pp.121-128 / 2018
  • Obstacle avoidance is one of the most important capabilities of an autonomous mobile robot. In this study, we propose a safe and efficient local path planning method for obstacle avoidance. The proposed method detects and tracks obstacles using the 3D depth information of an RGB-D sensor for path prediction. Based on the tracked obstacle information, the paths of the obstacles are predicted with a probability circle-based spatial search (PCSS) method, and Gaussian modeling is performed to reduce uncertainty and to create a caution cost function. The possibility of collision with the robot is evaluated along the predicted obstacle paths, and a local path is generated accordingly. The results of various experiments show that the proposed method enables robots to navigate safely and efficiently.
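
The Gaussian "caution" cost around predicted obstacle positions could be sketched roughly as below; the isotropic Gaussian, the grid parameters, and the function caution_cost_map are illustrative assumptions rather than the paper's PCSS formulation.

```python
import numpy as np

def caution_cost_map(grid_shape, resolution, predicted_positions, sigma=0.5):
    """Build a 2D cost map whose cost decays as a Gaussian around each
    predicted obstacle position (given in meters).

    grid_shape : (rows, cols) of the local cost map
    resolution : meters per cell
    """
    rows, cols = grid_shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    # Cell centers in metric coordinates.
    cell_x = (xs + 0.5) * resolution
    cell_y = (ys + 0.5) * resolution

    cost = np.zeros(grid_shape)
    for ox, oy in predicted_positions:
        d2 = (cell_x - ox) ** 2 + (cell_y - oy) ** 2
        cost = np.maximum(cost, np.exp(-d2 / (2.0 * sigma ** 2)))
    return cost

# Example: an obstacle predicted to reach (2.0 m, 1.5 m) at the next step.
cmap = caution_cost_map((40, 40), 0.1, [(2.0, 1.5)])
```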

Elevator Recognition and Position Estimation based on RGB-D Sensor for Safe Elevator Boarding (이동로봇의 안전한 엘리베이터 탑승을 위한 RGB-D 센서 기반의 엘리베이터 인식 및 위치추정)

  • Jang, Min-Gyung; Jo, Hyun-Jun; Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.70-76 / 2020
  • Multi-floor navigation of a mobile robot requires technology that allows the robot to get on and off an elevator safely. In this study, we therefore propose a method for recognizing the elevator from the robot's current position and estimating the elevator's location locally, so that the robot can board safely regardless of the position error accumulated during autonomous navigation. The proposed method uses a deep-learning-based image classifier to identify the elevator in the image information obtained from the RGB-D sensor, and extracts the boundary points between the elevator and the surrounding wall from the point cloud. This allows the robot to estimate a reliable position and boarding direction in real time for general elevators. Various experiments demonstrate the effectiveness and accuracy of the proposed method.
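
The geometric part of such a method (turning the boundary points between the elevator opening and the surrounding wall into an entry position and boarding direction) might look roughly like this in 2D; the left/right boundary inputs and the sign convention for the boarding direction are simplifying assumptions.

```python
import numpy as np

def boarding_pose(left_boundary, right_boundary):
    """Estimate an elevator entry position and boarding direction in 2D.

    left_boundary, right_boundary : (x, y) boundary points between the
    elevator opening and the wall, e.g. extracted from a point cloud.
    Returns the midpoint of the opening and a unit vector normal to it.
    """
    left = np.asarray(left_boundary, dtype=float)
    right = np.asarray(right_boundary, dtype=float)

    midpoint = (left + right) / 2.0
    along = right - left                      # direction along the door opening
    normal = np.array([-along[1], along[0]])  # perpendicular to the opening
    normal /= np.linalg.norm(normal)
    return midpoint, normal

# Example: a 0.9 m wide opening; the normal may need its sign flipped toward the robot.
mid, direction = boarding_pose((1.0, 2.0), (1.9, 2.0))
print(mid, direction)  # midpoint (1.45, 2.0), boarding direction (0.0, 1.0)
```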

Object tracking algorithm through RGB-D sensor in indoor environment (실내 환경에서 RGB-D 센서를 통한 객체 추적 알고리즘 제안)

  • Park, Jung-Tak; Lee, Sol; Park, Byung-Seo; Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.248-249 / 2022
  • In this paper, we propose a method for classifying and tracking multiple users based on information obtained with an RGB-D camera. The 3D information and color information acquired by the RGB-D camera are stored for each user. We then propose a user classification and location tracking algorithm over the whole image, which computes the similarity between users in the current frame and the previous frame from each user's position and appearance.
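
The frame-to-frame association described above (matching users between the previous and current frame by a combined position and appearance similarity) could be sketched as follows; the similarity weights, the cosine histogram measure, and the greedy assignment are illustrative assumptions.

```python
import numpy as np

def histogram_similarity(h1, h2):
    """Cosine similarity between two color histograms."""
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-9))

def associate_users(prev_users, curr_users, w_pos=0.5, w_app=0.5, max_dist=1.0):
    """Greedily match current detections to previously tracked users.

    Each user is a dict with 'pos' (3D position from depth) and 'hist' (color histogram).
    Returns a list of (prev_index, curr_index) matches.
    """
    matches, used = [], set()
    for i, prev in enumerate(prev_users):
        best_j, best_score = None, -np.inf
        for j, curr in enumerate(curr_users):
            if j in used:
                continue
            dist = np.linalg.norm(np.asarray(prev['pos']) - np.asarray(curr['pos']))
            score = (w_pos * max(0.0, 1.0 - dist / max_dist) +
                     w_app * histogram_similarity(prev['hist'], curr['hist']))
            if score > best_score:
                best_j, best_score = j, score
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```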

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa; Kim, Hyongjin; Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud-based map.
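
The 3D-RANSAC visual odometry step (estimating a 6-DOF rigid transform from matched 3D feature points while rejecting outliers) is commonly implemented with a Kabsch/SVD estimator inside a RANSAC loop, as in the generic sketch below; this is not the authors' GPU-accelerated implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch, Nx3 arrays)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_odometry(src, dst, iters=200, thresh=0.03):
    """RANSAC over matched 3D feature points from consecutive RGB-D frames."""
    best_inliers = np.zeros(len(src), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(src[best_inliers], dst[best_inliers])  # refit on inliers
```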

Noise Reduction Method Using Randomized Unscented Kalman Filter for RGB+D Camera Sensors (랜덤 무향 칼만 필터를 이용한 RGB+D 카메라 센서의 잡음 보정 기법)

  • Kwon, Oh-Seol
    • Journal of Broadcast Engineering / v.25 no.5 / pp.808-811 / 2020
  • This paper proposes a method to minimize the error of the Kinect camera sensor by using a randomized unscented Kalman filter. Kinect cameras, which provide RGB values and depth information, suffer from nonlinear sensor errors that cause problems in applications such as skeleton detection. Conventional methods have tried to remove these errors with various filtering techniques, but they are limited in how effectively they can remove nonlinear noise. In this paper, a randomized unscented Kalman filter is therefore applied to predict and update the nonlinear noise characteristics, and the performance of skeleton detection is then enhanced. The experimental results confirm that the proposed method is superior to conventional methods, both in quantitative results and in images reconstructed in 3D space.
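
For orientation, a minimal standard (non-randomized) scalar unscented Kalman filter smoothing a noisy depth sequence is sketched below; the randomized sigma-point sampling of the paper's RUKF and its actual state model are not reproduced here.

```python
import numpy as np

def ukf_smooth_depth(measurements, q=1e-4, r=1e-2, alpha=0.1, kappa=0.0, beta=2.0):
    """Minimal scalar UKF: random-walk depth state, direct (noisy) depth measurement."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)

    x, P = float(measurements[0]), 1.0
    estimates = []
    for z in measurements:
        # Predict: propagate sigma points through the (identity) process model, add Q.
        s = np.sqrt((n + lam) * P)
        sigmas = np.array([x, x + s, x - s])
        x_pred = np.dot(wm, sigmas)
        P_pred = np.dot(wc, (sigmas - x_pred) ** 2) + q
        # Update: propagate sigma points through the (identity) measurement model.
        s = np.sqrt((n + lam) * P_pred)
        sigmas = np.array([x_pred, x_pred + s, x_pred - s])
        z_pred = np.dot(wm, sigmas)
        Pzz = np.dot(wc, (sigmas - z_pred) ** 2) + r
        Pxz = np.dot(wc, (sigmas - x_pred) * (sigmas - z_pred))
        K = Pxz / Pzz
        x = x_pred + K * (z - z_pred)
        P = P_pred - K * Pzz * K
        estimates.append(x)
    return np.array(estimates)

# Example: smooth a noisy 1 m depth reading.
noisy = 1.0 + 0.05 * np.random.default_rng(0).standard_normal(100)
smoothed = ukf_smooth_depth(noisy)
```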

Depth-hybrid speeded-up robust features (DH-SURF) for real-time RGB-D SLAM

  • Lee, Donghwa; Kim, Hyungjin; Jung, Sungwook; Myung, Hyun
    • Advances in robotics research / v.2 no.1 / pp.33-44 / 2018
  • This paper presents a novel feature detection algorithm called depth-hybrid speeded-up robust features (DH-SURF) augmented by depth information in the speeded-up robust features (SURF) algorithm. In the keypoint detection part of classical SURF, the standard deviation of the Gaussian kernel is varied for its scale-invariance property, resulting in increased computational complexity. We propose a keypoint detection method with less variation of the standard deviation by using depth data from a red-green-blue depth (RGB-D) sensor. Our approach maintains a scale-invariance property while reducing computation time. An RGB-D simultaneous localization and mapping (SLAM) system uses a feature extraction method and depth data concurrently; thus, the system is well-suited for showing the performance of the DH-SURF method. DH-SURF was implemented on a central processing unit (CPU) and a graphics processing unit (GPU), respectively, and was validated through the real-time RGB-D SLAM.
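
The central idea (letting the measured depth of a keypoint pick the filter scale directly instead of searching over many Gaussian standard deviations) can be illustrated very roughly as follows; the inverse-depth rule and the constants are assumptions, not the exact DH-SURF formulation.

```python
def depth_to_scale(depth_m, reference_depth=1.0, reference_scale=1.6,
                   min_scale=0.8, max_scale=12.8):
    """Map a keypoint's measured depth to a single filter scale.

    A feature on a surface twice as far away appears roughly half as large in the
    image, so the detection scale is chosen inversely proportional to depth. This
    replaces an exhaustive search over many Gaussian standard deviations.
    """
    scale = reference_scale * reference_depth / max(depth_m, 1e-3)
    return min(max(scale, min_scale), max_scale)

# Example: nearby points get large scales, distant points small ones.
for d in (0.5, 1.0, 2.0, 4.0):
    print(d, depth_to_scale(d))
```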

Obstacle Avoidance of Indoor Mobile Robot using RGB-D Image Intensity (RGB-D 이미지 인텐시티를 이용한 실내 모바일 로봇 장애물 회피)

  • Kwon, Ki-Hyeon; Lee, Hyung-Bong
    • Journal of the Korea Society of Computer and Information / v.19 no.10 / pp.35-42 / 2014
  • It is possible to improve obstacle avoidance capability by training on and recognizing the obstacles present in a given indoor environment. We propose a technique that uses the underlying intensity values, along with an intensity map derived from the RGB-D image of a stereo-vision Kinect sensor, to recognize obstacles within a fixed distance. We evaluate the accuracy and execution time of pattern recognition algorithms such as PCA, ICA, LDA, and SVM to demonstrate the feasibility of this recognition. In a comparison between RGB-D data and intensity data, RGB-D data achieved a 4.2% higher accuracy rate, but for LDA and SVM respectively, intensity data was 29% and 31% faster in training time and 70% and 33% faster in testing time. LDA and SVM therefore offer good accuracy and better training/testing times for obstacle avoidance based on an intensity dataset on a mobile robot.
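
The classification experiment (training classifiers such as LDA and SVM on flattened intensity patches and comparing accuracy and timing) follows the usual scikit-learn pattern sketched below; the synthetic patches stand in for the obstacle dataset, which is not reproduced here.

```python
import time
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 400 flattened 16x16 intensity patches, 2 obstacle classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 256))
y = rng.integers(0, 2, size=400)
X[y == 1] += 0.5                      # separate the classes slightly
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="linear"))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    t1 = time.perf_counter()
    acc = clf.score(X_te, y_te)
    t2 = time.perf_counter()
    print(f"{name}: accuracy={acc:.3f}, train={t1 - t0:.3f}s, test={t2 - t1:.3f}s")
```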

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee; Koo, Jungmo; Kim, Jinki; Myung, Hyun
    • Advances in robotics research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in areas such as human-computer interaction and human activity recognition, there have been numerous approaches to detecting the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human upper-body pose estimation method using an RGB camera sensor. The method is efficient and cost-effective, since an RGB camera is inexpensive compared to the high-priced sensors more commonly used. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From the acquired RGB images, joint heatmaps are used to accurately estimate the coordinates of each joint. The network architecture is designed to learn and detect the joint locations via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose, and the obtained results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
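
The final step of such a pipeline (converting each joint's heatmap into image coordinates) is typically a simple peak lookup, as sketched below; the heatmap shapes and the upper-body joint list are illustrative assumptions.

```python
import numpy as np

UPPER_BODY_JOINTS = ["head", "neck", "r_shoulder", "r_elbow", "r_wrist",
                     "l_shoulder", "l_elbow", "l_wrist"]

def heatmaps_to_joints(heatmaps, min_confidence=0.1):
    """Convert per-joint heatmaps (J x H x W) into (x, y, confidence) tuples."""
    joints = []
    for hm in heatmaps:
        idx = np.argmax(hm)
        y, x = np.unravel_index(idx, hm.shape)
        conf = float(hm[y, x])
        joints.append((int(x), int(y), conf) if conf >= min_confidence else None)
    return joints

# Example with random heatmaps for the 8 assumed upper-body joints.
hm = np.random.default_rng(0).random((len(UPPER_BODY_JOINTS), 64, 64))
print(heatmaps_to_joints(hm))
```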