• Title/Summary/Keyword: 6D pose estimation

Multi-view Semi-supervised Learning-based 3D Human Pose Estimation (다시점 준지도 학습 기반 3차원 휴먼 자세 추정)

  • Kim, Do Yeop;Chang, Ju Yong
    • Journal of Broadcast Engineering / v.27 no.2 / pp.174-184 / 2022
  • 3D human pose estimation models can be classified into a multi-view model and a single-view model. In general, the multi-view model shows superior pose estimation performance compared to the single-view model. In the case of the single-view model, the improvement of the 3D pose estimation performance requires a large amount of training data. However, it is not easy to obtain annotations for training 3D pose estimation models. To address this problem, we propose a method to generate pseudo ground-truths of multi-view human pose data from a multi-view model and exploit the resultant pseudo ground-truths to train a single-view model. In addition, we propose a multi-view consistency loss function that considers the consistency of poses estimated from multi-view images, showing that the proposed loss helps the effective training of single-view models. Experiments using Human3.6M and MPI-INF-3DHP datasets show that the proposed method is effective for training single-view 3D human pose estimation models.
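
The abstract does not give the exact form of the multi-view consistency loss, so the following is only a minimal PyTorch-style sketch of the general idea: rotate each view's estimated 3D pose into a common reference frame and penalize deviation from the consensus. The tensor names, shapes, and the use of a mean consensus pose are assumptions for illustration, not the authors' formulation.

```python
# Minimal sketch (not the authors' code): a multi-view consistency loss that
# penalizes disagreement between 3D poses estimated independently from each view,
# after rotating every estimate into a common reference camera frame.
import torch

def multiview_consistency_loss(poses, R_to_ref):
    """poses: (V, J, 3) per-view root-relative 3D joints in each camera frame.
    R_to_ref: (V, 3, 3) rotations mapping each camera frame to a reference frame.
    Both names and shapes are assumptions for illustration."""
    # Bring every view's estimate into the same coordinate frame.
    aligned = torch.einsum('vij,vkj->vki', R_to_ref, poses)   # (V, J, 3)
    mean_pose = aligned.mean(dim=0, keepdim=True)             # consensus pose
    # Average L2 distance of each view's estimate from the consensus.
    return ((aligned - mean_pose) ** 2).sum(dim=-1).sqrt().mean()
```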

Fast Random-Forest-Based Human Pose Estimation Using a Multi-scale and Cascade Approach

  • Chang, Ju Yong;Nam, Seung Woo
    • ETRI Journal / v.35 no.6 / pp.949-959 / 2013
  • Since the recent launch of Microsoft Xbox Kinect, research on 3D human pose estimation has attracted a lot of attention in the computer vision community. Kinect shows impressive estimation accuracy and real-time performance on massive graphics processing unit hardware. In this paper, we focus on further reducing the computational complexity of the existing state-of-the-art method so that real-time 3D human pose estimation becomes applicable to devices with lower computing power. To this end, we propose two simple approaches to speed up the random-forest-based human pose estimation method. In the original algorithm, the random forest classifier is applied to all pixels of the segmented human depth image. We first use a multi-scale approach to reduce the number of such calculations. Second, the complexity of the random forest classification itself is decreased by the proposed cascade approach. Experimental results on real data show that our method is effective and runs in real time (30 fps) without any parallelization effort.
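
As a rough illustration of the multi-scale and cascade ideas (not the paper's implementation), the sketch below classifies only a subsampled grid of foreground depth pixels, runs a small, shallow random forest first, and falls back to the full forest only for pixels the first stage is unsure about. The depth features here are simple placeholders; the original work uses randomized depth-difference features and its own training setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # small_rf / full_rf are fitted instances

def depth_features(depth, ys, xs):
    # Placeholder features: depth value plus two fixed-offset differences.
    h, w = depth.shape
    d = depth[ys, xs]
    right = depth[ys, np.clip(xs + 5, 0, w - 1)]
    down = depth[np.clip(ys + 5, 0, h - 1), xs]
    return np.stack([d, right - d, down - d], axis=1)

def cascade_predict(depth, mask, small_rf, full_rf, stride=4, conf_thr=0.8):
    ys, xs = np.nonzero(mask)
    ys, xs = ys[::stride], xs[::stride]            # multi-scale: subsample foreground pixels
    feats = depth_features(depth, ys, xs)
    proba = small_rf.predict_proba(feats)          # stage 1: cheap, shallow forest
    labels = small_rf.classes_[proba.argmax(axis=1)]
    unsure = proba.max(axis=1) < conf_thr          # stage 2 only where stage 1 is uncertain
    if unsure.any():
        labels[unsure] = full_rf.predict(feats[unsure])
    return ys, xs, labels                          # per-pixel body-part labels
```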

Fast Hand Pose Estimation with Keypoint Detection and Annoy Tree (Keypoint Detection과 Annoy Tree를 사용한 2D Hand Pose Estimation)

  • Lee, Hui-Jae;Kang, Min-Hye
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.277-278 / 2021
  • Research on hand gesture recognition has been active recently, but most existing approaches require 3D information that includes depth, which means they do not work without a depth camera. This project proposes a method for hand gesture recognition from 2D images via hand keypoint detection, without using a depth camera. The InterHand2.6M dataset [1] released by Facebook is used for training. The proposed method proceeds in two main stages. First, hand detection is performed with an object detector; because the dataset was captured against dark backgrounds and detection performance drops in real usage environments, an image-composition augmentation technique is proposed to address this. Second, 21 hand keypoints are obtained through keypoint detection. Meaningful feature vectors are derived through experiments and used to build Annoy (Approximate Nearest Neighbors Oh Yeah) trees. After post-processing with the constructed Annoy trees, the final pose estimation is completed. Pose estimation with the Annoy tree was faster than using a neural network (NN) while achieving comparable accuracy.
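
A brief sketch of how the Annoy lookup stage could work, assuming the 21 (x, y) hand keypoints are flattened into a 42-D vector and matched against labeled reference poses by approximate nearest-neighbor search; the label set, vector construction, and majority vote are illustrative choices, not details from the paper.

```python
import numpy as np
from annoy import AnnoyIndex

DIM = 21 * 2                          # 21 hand keypoints, (x, y) each
index = AnnoyIndex(DIM, 'euclidean')
labels = []                           # gesture label for each indexed reference pose

def add_reference(keypoints_xy, label):
    """keypoints_xy: (21, 2) array of detected hand keypoints."""
    index.add_item(len(labels), np.asarray(keypoints_xy, float).ravel())
    labels.append(label)

def classify(keypoints_xy, k=5):
    ids = index.get_nns_by_vector(np.asarray(keypoints_xy, float).ravel(), k)
    votes = [labels[i] for i in ids]
    return max(set(votes), key=votes.count)   # majority vote over k nearest poses

# After adding all reference poses:
#   index.build(10)                   # build 10 trees before querying
```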

A Study on 6D Pose Estimation Method Using Industrial Robot and 2D Vision (산업용 로봇과 2D 비전을 연동한 6D 자세 추정 방법 연구)

  • Jang, Yang-Su;Jang, Kyung-Bae
    • Journal of Internet of Things and Convergence / v.10 no.5 / pp.19-26 / 2024
  • This study presents and verifies an easy, fast, and relatively cost-effective method for 6D pose estimation using industrial robots for bin picking in the manufacturing sector. Specifically, it details a method that integrates industrial robots with 2D cameras to ① acquire multi-view images of objects and collect training data, ② select variables from the collected data and implement a linear regression model, and ③ apply the trained model to estimate, verify, and evaluate the 6D pose of objects on industrial robots. The proposed data collection method and the implemented linear regression model demonstrated statistically significant results. The estimated 6D poses were validated against ground-truth values and evaluated in their application to industrial robots, confirming their validity. By using feature-point information extracted from images, rather than the images themselves, as inputs to the regression model, the data size was reduced, enabling direct embedding on the robot. This research approaches the problem of 3D spatial coordinates from a data-analysis perspective rather than from a geometric or computer-vision perspective.
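
Based only on what the abstract states (feature-point inputs, a linear regression model, a 6D pose output), a minimal sketch might look like the following; the file names, feature layout, and use of scikit-learn are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of the data-analysis approach described above: a multi-output
# linear regression from image feature-point coordinates to a 6D pose
# (x, y, z, roll, pitch, yaw).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical files: X is (N, F) feature-point coordinates from the multi-view
# 2D images, Y is (N, 6) poses recorded while the robot presented the object.
X_train, Y_train = np.load('features.npy'), np.load('poses.npy')

model = LinearRegression().fit(X_train, Y_train)
Y_pred = model.predict(X_train)
print('R^2 per pose component:',
      r2_score(Y_train, Y_pred, multioutput='raw_values'))

# The small coefficient matrix (F x 6) is what makes direct embedding on the
# robot controller feasible, compared to shipping a full image-based model.
```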

Hard Example Generation by Novel View Synthesis for 3-D Pose Estimation (3차원 자세 추정 기법의 성능 향상을 위한 임의 시점 합성 기반의 고난도 예제 생성)

  • Kim, Minji;Kim, Sungchan
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.9-17 / 2024
  • It is widely recognized that for 3D human pose estimation (HPE), dataset acquisition is expensive and the augmentation techniques used in conventional visual recognition tasks are of limited effectiveness. We address these difficulties with a simple but effective method that augments the viewpoints of input images when training a 3D HPE model. Our intuition is that meaningful variants of the input images can be obtained by viewing a human instance from an arbitrary viewpoint different from that of the original image. The core idea is to synthesize new images at different viewpoints that exhibit self-occlusion and are thus difficult to predict, even though they share the pose of the original example. We incorporate this idea into the training procedure of the 3D HPE model as an augmentation stage for the input samples. We show that the augmentation strategy must be carefully designed in terms of how frequently the augmentation is performed and which viewpoints are selected for synthesizing samples. To this end, we propose a new metric that measures the prediction difficulty of input images for 3D HPE in terms of the distance between corresponding keypoints on the two sides of the human body. Extensive exploration of augmentation probability choices and of example selection according to the proposed distance metric leads to a performance gain of up to 6.2% on Human3.6M, the well-known pose estimation dataset.
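
The abstract describes a difficulty metric based on the distance between corresponding keypoints on the two sides of the body but does not give its exact definition; the sketch below is one illustrative realization, with the left/right joint index pairs and the inverse-distance scoring chosen for illustration only.

```python
import numpy as np

# Hypothetical (left, right) joint index pairs: shoulders, elbows, wrists, hips, knees, ankles.
LR_PAIRS = [(11, 14), (12, 15), (13, 16), (1, 4), (2, 5), (3, 6)]

def lr_difficulty(keypoints_2d):
    """keypoints_2d: (J, 2) projected joints. Smaller left/right separation
    suggests stronger self-occlusion, i.e. a harder example."""
    d = [np.linalg.norm(keypoints_2d[l] - keypoints_2d[r]) for l, r in LR_PAIRS]
    return 1.0 / (np.mean(d) + 1e-6)   # higher score = harder example
```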

A Method for 3D Human Pose Estimation based on 2D Keypoint Detection using RGB-D information (RGB-D 정보를 이용한 2차원 키포인트 탐지 기반 3차원 인간 자세 추정 방법)

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • Journal of Internet Computing and Services / v.19 no.6 / pp.41-51 / 2018
  • Recently, in the field of video surveillance, deep learning-based methods have been applied to intelligent video surveillance systems, enabling robust detection of various events such as crime, fire, and other abnormal situations. However, because occlusion arises from the loss of 3D information when the 3D real world is projected onto a 2D image, the occlusion problem must be considered in order to detect objects accurately and estimate their pose. In this paper, we therefore detect moving objects while mitigating occlusion in the detection stage by adding depth information to the existing RGB information. A convolutional neural network applied to the detected region then predicts the positions of 14 keypoints of the human joints. Finally, to address the self-occlusion problem that occurs during pose estimation, we describe a method for 3D human pose estimation that extends the estimation to 3D space using the predicted 2D keypoints and a deep neural network. The resulting 2D and 3D pose estimates can serve as basic data for future human behavior recognition and contribute to the development of industrial technology.
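
The paper lifts the predicted 2D keypoints to 3D with a deep neural network; as a simpler geometric stand-in, the sketch below only shows how 2D keypoints combine with an aligned depth map and pinhole intrinsics to produce camera-frame 3D joints.

```python
import numpy as np

def backproject_keypoints(keypoints_2d, depth, fx, fy, cx, cy):
    """keypoints_2d: (J, 2) pixel coordinates (u, v); depth: HxW map in meters."""
    pts_3d = []
    for u, v in keypoints_2d.astype(int):
        z = depth[v, u]                # depth at the keypoint pixel
        x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
        y = (v - cy) * z / fy
        pts_3d.append((x, y, z))
    return np.array(pts_3d)            # (J, 3) camera-frame joint positions
```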

Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System (다중 도메인 비전 시스템 기반 제조 환경 안전 모니터링을 위한 동적 3D 작업자 자세 정합 기법)

  • Choi, Ji Dong;Kim, Min Young;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.303-310 / 2023
  • A single vision system limits the ability to accurately understand the spatial constraints and interactions between robots and dynamic workers that arise from gantry robots and collaborative robots during production. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep learning-based pose estimation model, to estimate a worker's dynamic 2D pose in real time and reconstruct it into 3D coordinates. The 3D coordinates reconstructed from the multi-domain vision system are aligned with the ICP algorithm and registered into a single 3D coordinate system. The proposed method showed effective performance in a manufacturing process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
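
A minimal sketch of the alignment step, assuming Open3D's point-to-point ICP is an acceptable stand-in for the ICP stage described above; the per-camera joint arrays and the identity initial guess are placeholders rather than details from the paper.

```python
import numpy as np
import open3d as o3d

def register_view(joints_src, joints_ref, max_dist=0.2):
    """Align one camera's reconstructed 3D joints (N, 3) to the reference view."""
    src, ref = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(joints_src)
    ref.points = o3d.utility.Vector3dVector(joints_ref)
    reg = o3d.pipelines.registration.registration_icp(
        src, ref, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation          # 4x4 transform into the reference frame

# Applying the transform to the source joints:
#   T = register_view(joints_src, joints_ref)
#   aligned = (T @ np.c_[joints_src, np.ones(len(joints_src))].T).T[:, :3]
```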

Head Pose Estimation Using Error Compensated Singular Value Decomposition for 3D Face Recognition (3차원 얼굴 인식을 위한 오류 보상 특이치 분해 기반 얼굴 포즈 추정)

  • 송환종;양욱일;손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.6 / pp.31-40 / 2003
  • Most face recognition systems are based on 2D images and are applied in many applications. However, it is difficult to recognize a face when the pose varies severely, so head pose estimation is an essential step for improving the recognition rate when a face is not frontal. In this paper, we propose a novel head pose estimation algorithm for 3D face recognition. Given the 3D range image of an unknown face as input, we automatically extract facial feature points based on the face curvature. We propose an Error Compensated Singular Value Decomposition (EC-SVD) method based on the extracted facial feature points: the initial rotation angle is obtained with the SVD method, and a refinement procedure then compensates for the remaining errors. The proposed algorithm operates on the extracted facial features in the normalized 3D face space. In addition, we propose a 3D nearest neighbor classifier for selecting face candidates for 3D face recognition. Simulation results demonstrate the efficiency and validity of the proposed algorithm.
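
The abstract does not spell out the SVD step, but the standard SVD-based rigid alignment of matched 3D feature points (Arun/Kabsch) gives the kind of initial rotation estimate that the proposed EC-SVD then refines; the sketch below covers only that standard step, not the error-compensation refinement.

```python
import numpy as np

def svd_rigid_transform(P, Q):
    """P, Q: (N, 3) corresponding 3D feature points (probe face vs. frontal model).
    Returns rotation R and translation t such that Q ~= P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cQ - cP @ R.T
    return R, t
```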

A New Head Pose Estimation Method based on Boosted 3-D PCA (새로운 Boosted 3-D PCA 기반 Head Pose Estimation 방법)

  • Lee, Kyung-Min;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.105-109 / 2021
  • In this paper, we train and evaluate Boosted 3-D PCA and analyze its network characteristics and performance. Training is performed on the 300W-LP dataset using the same training procedure as Boosted 3-D PCA, and evaluation is carried out on the AFLW2000 dataset. The results show performance similar to that reported in the Boosted 3-D PCA paper. Because the model can be trained from face-image datasets more freely than the existing landmark-to-pose approach, poses can be predicted accurately in real-world situations. Since the optimization of the keypoint set is not independent, we also confirm a procedure that can reduce the computation time. This analysis is expected to be a valuable resource for improving the performance of the Boosted 3-D PCA network or applying it to various application domains.

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method for estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: we combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points, together with an inertial estimate of the camera's 6-DOF (degree-of-freedom) state, into one linear inhomogeneous equation. This allows us to use the singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera 6-DOF with ease of implementation.
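
The construction of the linear inhomogeneous system from the known/unknown 3D feature points and the inertial data is specific to the paper and is not reproduced here; the sketch below only shows the final step the abstract mentions, solving A x = b for the 6-DOF unknowns via the SVD (pseudoinverse), under the assumption that A has full column rank.

```python
import numpy as np

def solve_pose_lsq(A, b):
    """A: (M, 6) stacked constraint rows, b: (M,) measurements.
    Returns the least-squares 6-DOF solution via the SVD pseudoinverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = Vt.T @ ((U.T @ b) / s)        # pseudoinverse solution, assumes full rank
    return x                          # 6-DOF camera pose parameters
```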