• Title, Summary, Keyword: 3-D pose estimation


Combining Shape and SIFT Features for 3-D Object Detection and Pose Estimation (효과적인 3차원 객체 인식 및 자세 추정을 위한 외형 및 SIFT 특징 정보 결합 기법)

  • Tak, Yoon-Sik; Hwang, Een-Jun
    • The Transactions of The Korean Institute of Electrical Engineers, v.59 no.2, pp.429-435, 2010
  • Three-dimensional (3-D) object detection and pose estimation from a single-view query image has been an important issue in various fields such as medical applications, robot vision, and manufacturing automation. However, most existing methods are not suitable for real-time environments, since object detection and pose estimation require extensive information and computation. In this paper, we present a fast 3-D object detection and pose estimation scheme based on surrounding-camera, view-changed images of objects. Our scheme has two parts. First, we retrieve images similar to the query image from the database based on the shape feature and compute candidate poses. Second, we perform accurate pose estimation for the candidate poses using the scale-invariant feature transform (SIFT) method. We carried out extensive experiments on our prototype system, achieved excellent performance, and report some of the results.
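
The two-stage idea above (coarse shape-based candidate retrieval, then SIFT-based verification of the candidate poses) can be sketched with OpenCV. This is a minimal illustration, not the authors' implementation; the Hu-moment shape measure, the five-candidate cutoff, and the database layout are assumptions.

```python
import cv2
import numpy as np

def shape_distance(query_mask, db_mask):
    """Coarse shape similarity on binary silhouette images via Hu moments
    (a stand-in for the paper's shape feature)."""
    return cv2.matchShapes(query_mask, db_mask, cv2.CONTOURS_MATCH_I1, 0.0)

def sift_score(query_gray, candidate_gray, ratio=0.75):
    """Finer verification: count Lowe-ratio-filtered SIFT matches to a candidate view."""
    sift = cv2.SIFT_create()
    _, dq = sift.detectAndCompute(query_gray, None)
    _, dc = sift.detectAndCompute(candidate_gray, None)
    if dq is None or dc is None:
        return 0
    matches = cv2.BFMatcher().knnMatch(dq, dc, k=2)
    return sum(1 for p in matches
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def estimate_pose(query_gray, query_mask, database):
    """database: list of (gray_view, mask_view, pose) surrounding-camera views."""
    # Stage 1: keep the few views whose silhouette best matches the query shape.
    candidates = sorted(database, key=lambda e: shape_distance(query_mask, e[1]))[:5]
    # Stage 2: return the candidate pose with the most SIFT matches.
    best = max(candidates, key=lambda e: sift_score(query_gray, e[0]))
    return best[2]
```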

2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang; Hua, Guoguang; Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2042-2059, 2019
  • Digital shadow puppetry has traditionally relied on expensive motion-capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose data with a simple camera in real scenarios and uses it to drive a virtual Chinese shadow play in a 2.5D scene. We propose a special method, called 2.5D human pose estimation, for extracting human pose data to drive the virtual shadow play. First, we use a 3D human pose estimation method to obtain the initial data. In the subsequent transformation, we treat depth as an implicit feature and map the body joints into a constrained range; we call the resulting pose data 2.5D pose data. However, the 2.5D pose data cannot directly control the shadow puppet well, owing to the differences in motion pattern and composition structure between a real pose and a shadow puppet. To this end, the 2.5D pose data are transformed in an implicit pose-mapping space based on a self-network, and the final 2.5D pose expression data are produced for animating the shadow puppets. Experimental results demonstrate the effectiveness of the new method.
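
The transformation the authors call 2.5D pose data (x and y drive the 2D puppet while depth is kept only implicitly, mapped into a constrained range) can be illustrated with a small NumPy sketch; the normalization and the constraint range used here are assumptions.

```python
import numpy as np

def to_2p5d(joints_3d, depth_range=(-1.0, 1.0)):
    """Map 3D joints (N, 3) to 2.5D: x, y stay as-is for the 2D puppet, while
    depth is squashed into a bounded 'layering' range and kept implicit."""
    joints_3d = np.asarray(joints_3d, dtype=float)
    xy = joints_3d[:, :2]
    z = joints_3d[:, 2]
    # Normalize depth to [0, 1] over this frame, then rescale to the constraint range.
    z_norm = (z - z.min()) / (np.ptp(z) or 1.0)
    lo, hi = depth_range
    z_layer = lo + z_norm * (hi - lo)
    return np.column_stack([xy, z_layer])

# Toy usage: 17 joints from a 3D pose estimator.
pose_3d = np.random.randn(17, 3)
pose_2p5d = to_2p5d(pose_3d)
print(pose_2p5d.shape)  # (17, 3)
```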

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate the 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of lighting variations and self-occlusion by updating the template dynamically. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked by using optical flow, and the variations are retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
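
The retargeting step, deforming the local face region by RBF interpolation of the tracked feature-point displacements, can be sketched as follows. The Gaussian kernel and its width are assumptions; the abstract only states that an RBF deforms the area around the major feature points.

```python
import numpy as np

def rbf_deform(vertices, ctrl_points, ctrl_disp, sigma=0.1):
    """Deform mesh vertices (N, 3) by Gaussian-RBF interpolation of the
    displacements (M, 3) measured at control/feature points (M, 3)."""
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = kernel(ctrl_points, ctrl_points)                               # (M, M)
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(ctrl_points)), ctrl_disp)
    return vertices + kernel(vertices, ctrl_points) @ weights

# Toy usage: move 5 feature points and let nearby vertices follow smoothly.
verts = np.random.rand(100, 3)
ctrl = verts[:5]
disp = 0.05 * np.random.randn(5, 3)
deformed = rbf_deform(verts, ctrl, disp)
```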

An Indoor Pose Estimation System Based on Recognition of Circular Ring Patterns (원형 링 패턴 인식에 기반한 실내용 자세추정 시스템)

  • Kim, Heon-Hui; Ha, Yun-Su
    • Journal of Advanced Marine Engineering and Technology, v.36 no.4, pp.512-519, 2012
  • This paper proposes a 3-D pose (position and orientation) estimation system based on the recognition of circular ring patterns. To deal with the monocular vision-based pose estimation problem, we specially design a circular ring pattern that has the merit of simplicity from the standpoint of object recognition. A pose estimation procedure is described in detail that exploits the geometric transformation of the circular ring pattern in the 2-D perspective projection space. The proposed method is evaluated by analyzing the accuracy and precision of 3-D pose estimation for a quadrotor-type vehicle in 3-D space.
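
A rough sense of how a circular pattern constrains pose: under a pinhole model, the ring projects to an ellipse whose size gives range and whose foreshortening gives tilt. The sketch below uses OpenCV ellipse fitting with an assumed centered-ring approximation; it is not the paper's full procedure.

```python
import cv2
import numpy as np

def ring_pose_from_contour(contour, ring_radius_m, fx):
    """Approximate range and tilt of a circular ring from its elliptical image.
    contour: (N, 1, 2) int32 points on the projected ring; fx: focal length in pixels.
    Assumes the ring is roughly centered in the image (small-offset approximation)."""
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(contour)
    a = max(d1, d2) / 2.0                       # semi-major axis in pixels
    b = min(d1, d2) / 2.0                       # semi-minor axis in pixels
    distance = fx * ring_radius_m / a           # pinhole: a ~ fx * R / Z
    tilt = np.degrees(np.arccos(np.clip(b / a, -1.0, 1.0)))  # foreshortening -> tilt
    return (cx, cy), distance, tilt, angle

# Toy usage: a synthetic foreshortened ring seen by a camera with fx = 600 px.
pts = np.array([[[int(320 + 80 * np.cos(t)), int(240 + 48 * np.sin(t))]]
                for t in np.linspace(0, 2 * np.pi, 60)], dtype=np.int32)
print(ring_pose_from_contour(pts, ring_radius_m=0.10, fx=600.0))
```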

3-D Pose Estimation of an Elliptic Object Using Two Coplanar Points (두 개의 공면점을 활용한 타원물체의 3차원 위치 및 자세 추정)

  • Kim, Heon-Hui; Park, Kwang-Hyun; Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.4, pp.23-35, 2012
  • This paper presents a 3-D pose (position and orientation) estimation method for an elliptic object in 3-D space. It is difficult to resolve the problem of determining 3-D pose parameters with respect to an elliptic feature in 3-D space by interpretation of its projected feature onto an image plane. As an alternative, we propose a two points-based pose estimation algorithm to seek the 3-D information of an elliptic feature. The proposed algorithm determines a homogeneous transformation uniquely for a given correspondence set of an ellipse and two coplanar points that are defined on model and image plane, respectively. For each plane, two triangular features are extracted from an ellipse and two points based on the polarity in 2-D projection space. A planar homography is first estimated by the triangular feature correspondences, then decomposed into 3-D pose parameters. The proposed method is evaluated through a series of experiments for analyzing the errors of 3-D pose estimation and the sensitivity with respect to point locations.
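
The last two steps, estimating a planar homography from the feature correspondences and decomposing it into pose parameters, can be sketched with OpenCV and the standard decomposition H proportional to K[r1 r2 t] for a model plane at Z = 0. The correspondences and intrinsics below are placeholders; the triangle construction from the ellipse and the two coplanar points is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical correspondences: coplanar model points (metres, model plane Z = 0)
# and their detected image locations (pixels), standing in for the triangular
# features built from the ellipse and the two coplanar points.
model_pts = np.array([[0.00, 0.00], [0.10, 0.00], [0.10, 0.08], [0.00, 0.08]], np.float32)
image_pts = np.array([[310, 242], [398, 250], [392, 331], [305, 322]], np.float32)

K = np.array([[600.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Planar homography from the correspondences.
H, _ = cv2.findHomography(model_pts, image_pts)

# Decompose H = s * K [r1 r2 t] for a plane at Z = 0 (Zhang-style decomposition).
M = np.linalg.inv(K) @ H
s = 1.0 / np.linalg.norm(M[:, 0])
r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
if t[2] < 0:                          # keep the plane in front of the camera
    r1, r2, t = -r1, -r2, -t
R_approx = np.column_stack([r1, r2, np.cross(r1, r2)])
U, _, Vt = np.linalg.svd(R_approx)    # project onto the nearest rotation matrix
R = U @ Vt
print("R =\n", R, "\nt =", t)
```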

Development of 3-Dimensional Pose Estimation Algorithm using Inertial Sensors for Humanoid Robot (관성 센서를 이용한 휴머노이드 로봇용 3축 자세 추정 알고리듬 개발)

  • Lee, Ah-Lam; Kim, Jung-Han
    • Journal of Institute of Control, Robotics and Systems, v.14 no.2, pp.133-140, 2008
  • In this paper, a small and effective attitude estimation system for a humanoid robot was developed. Four small inertial sensors (a 3D accelerometer and three 1D gyroscopes) were packaged together for the inertial measurements. An effective 3D pose estimation algorithm for a low-cost DSP, based on an extended Kalman filter, was developed and evaluated. The algorithm has a very simple structure composed of three modules: a linear acceleration estimator, an external acceleration detector, and a pseudo-accelerometer output estimator. It also has an effective probability-based switching structure and a simple feedback loop for the extended Kalman filter. Special test equipment using a linear motor was developed for testing the 3D pose sensor, and the experimental results showed very fast convergence to the true values and effective responses. The popular TMS320F2812 DSP was used to calculate the robot's 3D attitude and translational acceleration, and the whole system was packaged in a small form factor for humanoid robots. The outputs of the 3D sensor (pitch, roll, 3D linear acceleration, and 3D angular rate) can be transmitted to a humanoid robot at 200 Hz.
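
The switching structure described here (gyro propagation, corrected by the accelerometer only when no external acceleration is detected) can be illustrated with a much simpler complementary filter rather than the paper's extended Kalman filter; the gate threshold and gain below are assumptions.

```python
import numpy as np

def update_attitude(roll, pitch, gyro, accel, dt, gain=0.02, g=9.81, gate=0.5):
    """One step of a complementary filter for roll/pitch (rad).
    gyro: (p, q, r) in rad/s; accel: (ax, ay, az) in m/s^2, body frame."""
    p, q, _ = gyro
    # Propagate with the gyros (small-angle body-rate to Euler-rate approximation).
    roll += p * dt
    pitch += q * dt
    # Correct with the accelerometer only when it looks like pure gravity,
    # i.e. no large external acceleration (the paper's switching idea).
    if abs(np.linalg.norm(accel) - g) < gate:
        ax, ay, az = accel
        roll_acc = np.arctan2(ay, az)
        pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
        roll += gain * (roll_acc - roll)
        pitch += gain * (pitch_acc - pitch)
    return roll, pitch

# Toy usage at 200 Hz (the paper's output rate).
roll = pitch = 0.0
for _ in range(200):
    roll, pitch = update_attitude(roll, pitch,
                                  gyro=(0.01, 0.0, 0.0),
                                  accel=(0.0, 0.0, 9.81),
                                  dt=1.0 / 200.0)
print(np.degrees([roll, pitch]))
```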

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum; Bok, Yunsu; Kweon, In So
    • The Journal of Korea Robotics Society, v.12 no.1, pp.33-41, 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: Multi-view object co-segmentation and pose estimation. In the first phase, we explain an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by the convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space, and has distinctive color models from those of the backgrounds. In the second phase, we retrieve a 3D model instance with correct upright orientation, and estimate a relative pose of the object observed from images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlapping regions and boundaries between the multi-view co-segmentations and projected masks of the reference model. Based on high-quality co-segmentations consistent across all different viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method using various examples.
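
The pose-scoring idea, maximizing overlap between the multi-view co-segmentations and the projected masks of a model hypothesis, reduces to summing a per-view overlap score. The IoU measure and the render_fn interface below are simplifications of the paper's combined region and boundary energy.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def pose_score(coseg_masks, render_fn, pose):
    """Sum per-view overlap between the co-segmentations and the model mask
    rendered under 'pose'; render_fn(view_index, pose) -> boolean mask (assumed)."""
    return sum(iou(mask, render_fn(i, pose)) for i, mask in enumerate(coseg_masks))

def best_pose(coseg_masks, render_fn, candidate_poses):
    """Pick the candidate (model index and pose) that best explains all views."""
    return max(candidate_poses, key=lambda p: pose_score(coseg_masks, render_fn, p))
```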

2D - 3D Human Face Verification System based on Multiple RGB-D Camera using Head Pose Estimation (얼굴 포즈 추정을 이용한 다중 RGB-D 카메라 기반의 2D - 3D 얼굴 인증을 위한 시스템)

  • Kim, Jung-Min; Li, Shengzhe; Kim, Hak-Il
    • Journal of the Korea Institute of Information Security & Cryptology, v.24 no.4, pp.607-616, 2014
  • Face recognition is a big challenge in surveillance systems, since different rotation angles of the face make it difficult to recognize the same person. This paper proposes a novel method to recognize faces under different head poses by using 3D information of the face. First, head pose estimation (estimation of the head pose angles) is performed with the POSIT algorithm. Then, 3D face image data is constructed using the estimated head pose. After that, matching between the 2D image and the constructed 3D face is performed. Face verification is carried out with a commercial face recognition SDK. Performance evaluation of the proposed method indicates that the head pose estimation error is below 10 degrees and the matching rate is about 95%.
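
The head-pose step can be reproduced in spirit with OpenCV's solvePnP on a few facial landmarks and a generic 3D face template. The paper uses the POSIT algorithm; the landmark coordinates and intrinsics below are placeholder values.

```python
import cv2
import numpy as np

# Generic 3D landmark positions (mm) on a template head: nose tip, chin,
# left/right eye corner, left/right mouth corner (assumed values).
model_3d = np.array([[0.0, 0.0, 0.0],
                     [0.0, -63.6, -12.5],
                     [-43.3, 32.7, -26.0],
                     [43.3, 32.7, -26.0],
                     [-28.9, -28.9, -24.1],
                     [28.9, -28.9, -24.1]])

# Corresponding detected 2D landmarks (pixels, hypothetical frame).
image_2d = np.array([[320.0, 240.0], [325.0, 310.0], [270.0, 205.0],
                     [370.0, 200.0], [290.0, 280.0], [355.0, 278.0]])

K = np.array([[600.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_3d, image_2d, K, None)
R, _ = cv2.Rodrigues(rvec)
euler_deg = cv2.RQDecomp3x3(R)[0]    # rough Euler-angle readout, in degrees
print(ok, euler_deg)
```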

Shape Descriptor for 3D Foot Pose Estimation (3차원 발 자세 추정을 위한 새로운 형상 기술자)

  • Song, Ho-Geun; Kang, Ki-Hyun; Jung, Da-Woon; Yoon, Yong-In
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.2, pp.469-478, 2010
  • This paper proposes an effective shape descriptor for 3D foot pose estimation. To reduce processing time, a silhouette-based foot image database is built, and meta-information containing the 3D pose of the foot is attached to each entry. We also propose a modified Centroid Contour Distance descriptor whose feature space is small and whose pose estimation performance is better than that of existing descriptors. To analyze the performance of the descriptor, we evaluate its time and space complexity together with its retrieval accuracy, and compare them with previous methods. Experimental results show that the proposed descriptor is more effective than previous methods in terms of feature extraction time and pose estimation accuracy.
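
The baseline Centroid Contour Distance descriptor that the paper modifies takes only a few lines of NumPy: sample the silhouette contour and record each sample's distance to the centroid, normalized for scale. The sample count and normalization here are assumptions.

```python
import numpy as np

def centroid_contour_distance(contour, n_samples=64):
    """Basic CCD shape descriptor from a closed contour of shape (N, 2):
    distances from the centroid to n_samples evenly spaced contour points,
    normalized by the maximum distance for scale invariance."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    # Resample the contour uniformly by index (a crude stand-in for arc length).
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    dists = np.linalg.norm(contour[idx] - centroid, axis=1)
    return dists / dists.max()

# Toy usage: an elliptical silhouette.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
silhouette = np.column_stack([100 * np.cos(t), 60 * np.sin(t)])
print(centroid_contour_distance(silhouette).shape)  # (64,)
```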

Fast Random-Forest-Based Human Pose Estimation Using a Multi-scale and Cascade Approach

  • Chang, Ju Yong; Nam, Seung Woo
    • ETRI Journal, v.35 no.6, pp.949-959, 2013
  • Since the recent launch of the Microsoft Xbox Kinect, research on 3D human pose estimation has attracted a lot of attention in the computer vision community. Kinect shows impressive estimation accuracy and real-time performance on massive graphics processing unit hardware. In this paper, we focus on further reducing the computational complexity of the existing state-of-the-art method to make real-time 3D human pose estimation applicable to devices with lower computing power. To this end, we propose two simple approaches to speed up the random-forest-based human pose estimation method. In the original algorithm, the random forest classifier is applied to all pixels of the segmented human depth image. We first use a multi-scale approach to reduce the number of such calculations. Second, the complexity of the random forest classification itself is decreased by the proposed cascade approach. Experimental results on real data show that our method is effective and runs in real time (30 fps) without any parallelization effort.
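
The two speed-ups can be sketched independently of the body-part forest itself: classify on a subsampled pixel grid (the multi-scale step), and let a small, cheap forest accept easy pixels so that the large forest is consulted only for the rest (the cascade step). The confidence threshold and forest sizes are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cascade_predict(small_forest, big_forest, features, conf_thresh=0.9):
    """Cascade: accept the small forest's label where it is confident,
    and run the larger forest only on the remaining 'hard' pixels."""
    proba = small_forest.predict_proba(features)
    labels = small_forest.classes_[proba.argmax(axis=1)]
    hard = proba.max(axis=1) < conf_thresh
    if hard.any():
        labels[hard] = big_forest.predict(features[hard])
    return labels

# Toy usage: depth-difference-like features for foreground pixels of one frame.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(5000, 16)), rng.integers(0, 8, size=5000)
small = RandomForestClassifier(n_estimators=3, max_depth=8).fit(X_train, y_train)
big = RandomForestClassifier(n_estimators=30, max_depth=16).fit(X_train, y_train)

pixels = rng.normal(size=(10000, 16))
coarse = pixels[::4]                  # multi-scale: classify every 4th pixel first
print(cascade_predict(small, big, coarse).shape)
```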