• Title/Summary/Keyword: Head Pose Estimation

Design of Robust Face Recognition System Realized with the Aid of Automatic Pose Estimation-based Classification and Preprocessing Networks Structure

  • Kim, Eun-Hu; Kim, Bong-Youn; Oh, Sung-Kwun; Kim, Jin-Yul
    • Journal of Electrical Engineering and Technology / v.12 no.6 / pp.2388-2398 / 2017
  • In this study, we propose a face recognition system that is robust to pose variations, based on automatic pose estimation. A radial basis function neural network (RBFNN) is applied as one of the functional components of the overall system. The proposed system consists of preprocessing and recognition modules that address the pose variation and high-dimensional pattern recognition problems. In the preprocessing part, principal component analysis (PCA) and two-directional two-dimensional PCA ($(2D)^2$PCA) are applied; these modules reduce the dimensionality of the feature space. The proposed RBFNN architecture consists of three functional modules, namely the condition, conclusion, and inference phases, realized in terms of fuzzy "if-then" rules. In the condition phase, the input space is partitioned by fuzzy clustering realized with the Fuzzy C-Means (FCM) algorithm. In the conclusion phase, the connections (weights) are realized through four types of polynomials: constant, linear, quadratic, and modified quadratic. The coefficients of the RBFNN model are obtained by the fuzzy inference method constituting the inference phase. The essential design parameters of the network (such as the number of nodes and the fuzzification coefficient) are optimized with the aid of Particle Swarm Optimization (PSO). Experimental results on standard face databases (Honda/UCSD, Cambridge Head Pose, and IC&CI) demonstrate the effectiveness and efficiency of the proposed face recognition system compared with other studies.
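As an illustration of the preprocessing idea, the PCA projection that reduces the dimensionality of face vectors can be sketched as follows. This is a minimal numpy example on synthetic data; it is not the paper's $(2D)^2$PCA variant or its RBFNN pipeline:

```python
import numpy as np

def pca_project(X, k):
    """Project data onto the top-k principal components.

    X: (n_samples, n_features) data matrix; k: target dimensionality.
    Returns the (n_samples, k) projection and the component matrix.
    """
    Xc = X - X.mean(axis=0)                       # center the data
    # SVD of the centered data yields the principal directions in Vt
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # top-k eigenvectors
    return Xc @ components.T, components

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 100))                # 20 toy "face" vectors
proj, comps = pca_project(faces, k=5)
print(proj.shape)  # (20, 5)
```

In the actual system, such low-dimensional projections would then feed the RBFNN classifier.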

Robot Posture Estimation Using Circular Image of Inner-Pipe (원형관로 영상을 이용한 관로주행 로봇의 자세 추정)

  • Yoon, Ji-Sup; Kang, E-Sok
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.6 / pp.258-266 / 2002
  • This paper proposes an image processing algorithm that estimates the pose of an inner-pipe crawling robot. The inner-pipe crawling robot is usually equipped with a lighting device and a camera on its head for monitoring and inspecting defects on the pipe wall and/or for maintenance operations. The proposed method uses these existing devices without introducing extra sensors, and is based on the fact that the position and the intensity of the light reflected from the inner wall of the pipe vary with the posture of the robot and the camera. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. Because the vanishing point of the reflected light moves in the direction opposite to the camera rotation, the camera rotation angle can be estimated. Likewise, because the brightest part of the reflected light moves in the same direction as the camera translation, the camera position can be obtained. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
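One of the geometric cues described above, that the brightest region of the reflected light shifts with camera translation, can be illustrated with a toy example: the centroid of the brightest pixels, measured as an offset from the image center, indicates the translation direction. This is a simplified sketch on a synthetic image, not the paper's actual algorithm:

```python
import numpy as np

def bright_centroid(img, thresh=0.9):
    """Centroid of the brightest pixels; its offset from the image
    center hints at the direction of camera translation."""
    mask = img >= thresh * img.max()       # keep near-maximum pixels
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    h, w = img.shape
    return cx - w / 2.0, cy - h / 2.0      # offset from image center

# synthetic "reflection": a bright spot to the right of center
img = np.zeros((40, 40))
img[18:22, 28:32] = 1.0
dx, dy = bright_centroid(img)              # dx > 0: spot is right of center
```

A real implementation would also have to segment the reflection robustly against noise and texture on the pipe wall.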

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka; Sung, Nak-Jun; Ma, Jun; Choi, Yoo-Joo; Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce computation cost, using a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges because of the inherent limitations of color and texture cues. On the other hand, depth information, which lets an HPE framework detect body parts in 3D coordinates, can be useful for addressing these challenges. However, depth-based HPE requires a depth sensor, which imposes space constraints and additional cost. In particular, the results of depth-based HPE are less reliable because of the need for pose initialization and the lower stability of frame tracking. Therefore, this paper proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts; this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity metric. By taking advantage of both the RGB image-based and depth-based HPE methods, our RGB-D method achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both RGB-based and depth-based HPE.
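The evaluation metric mentioned above, COCO Object Keypoint Similarity (OKS), averages a Gaussian similarity of per-keypoint distances over the visible keypoints. A minimal sketch of the standard formula (variable names are illustrative, not from the paper):

```python
import numpy as np

def oks(pred, gt, vis, area, k):
    """COCO Object Keypoint Similarity for one person.

    pred, gt: (N, 2) keypoint coordinates; vis: (N,) visibility flags;
    area: object scale s^2; k: (N,) per-keypoint falloff constants.
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)      # squared distances
    e = d2 / (2.0 * area * k ** 2)             # normalized error
    v = vis > 0                                # only labeled keypoints count
    return np.exp(-e)[v].sum() / max(v.sum(), 1)

gt = np.array([[10.0, 10.0], [20.0, 20.0]])
pred = gt.copy()                               # perfect prediction
score = oks(pred, gt, np.array([2, 2]), area=100.0, k=np.array([0.1, 0.1]))
# a perfect prediction gives OKS = 1.0
```

mAP/mAR are then obtained by thresholding OKS at several levels, analogous to IoU thresholds in object detection.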

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju; Kim, Jin-Suh; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • As displays become larger and more varied in form, previous gaze tracking methods no longer apply; mounting the gaze tracking camera above the display solves the problem of display size and height. However, this setup cannot use the corneal reflection of infrared illumination that previous methods rely on. This paper proposes a pupil detection method that is robust to eye occlusion, together with a simple method for calculating the gaze position from the inner eye corner, the pupil center, and the face pose information. In the proposed method, frames for gaze tracking are captured by switching the camera between wide-angle and narrow-angle modes according to the person's position. If a face is detected within the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after computing the face position. The frame captured in narrow-angle mode contains the gaze direction information of a person at a long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze direction calculation step. The face pose is estimated by mapping feature points of the detected face onto a 3D model. To calculate the gaze direction, an ellipse is first fit to the edge information of the iris; if the pupil is occluded, its position is estimated with a deformable template. Then, using the pupil center, the inner eye corner, and the face pose information, the gaze position on the display is calculated. Experiments over varying distances demonstrate that the proposed gaze tracking algorithm overcomes the constraints imposed by display form and effectively calculates the gaze direction of a person at long distance using a single camera.
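The final step, mapping the pupil-to-eye-corner offset plus the head pose to a display position, might be sketched roughly as below. This is a heavily simplified, hypothetical mapping: the function, its `screen_dist` and `scale` parameters, and the linear form are all assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def gaze_point(pupil, corner, head_pose_deg, screen_dist, scale):
    """Toy gaze mapping: the pupil-to-corner offset (pixels), corrected
    by head yaw/pitch, is scaled onto the display plane."""
    offset = np.asarray(pupil, float) - np.asarray(corner, float)
    yaw, pitch = np.radians(head_pose_deg)
    # eye-offset term plus the displacement contributed by head rotation
    gx = scale * offset[0] + screen_dist * np.tan(yaw)
    gy = scale * offset[1] + screen_dist * np.tan(pitch)
    return gx, gy

# pupil 5 px right of the inner eye corner, head facing straight ahead
x, y = gaze_point(pupil=(105, 50), corner=(100, 50),
                  head_pose_deg=(0.0, 0.0), screen_dist=600.0, scale=4.0)
```

In practice `scale` would come from a per-user calibration and the head-pose term from the 3D model fit described in the abstract.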

Multi-Scale Deconvolution Head Network for Human Pose Estimation (인체 자세 추정을 위한 다중 해상도 디컨볼루션 출력망)

  • Kang, Won Jun; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.68-71 / 2020
  • Recently, deep learning-based human pose estimation has been actively studied. A widely used network model, simple in structure yet strong in performance, concatenates a backbone network used for image classification with a deconvolution head network [1]. The conventional deconvolution head network stacks deconvolution layers to convert all low-resolution feature maps to high resolution before making the final pose estimate, which makes it difficult to exploit the features obtained at various resolutions evenly. This paper therefore proposes a method that performs pose estimation after every deconvolution layer, computing at multiple resolutions, and aggregates the results for the final estimate. In experiments, Res50 with the conventional deconvolution head network achieved 0.717 AP, while Res101 with the conventional head achieved 0.727 AP, a gain of 0.010 AP at the cost of more than a 50% increase in the number of parameters. In contrast, Res50 with the proposed multi-scale deconvolution head network achieved 0.720 AP, a gain of 0.003 AP, with only about a 1% increase in parameters. This confirms that improving the structure of the deconvolution head network can effectively improve human pose estimation performance with a very small increase in parameters.
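The core idea of a multi-scale head, producing an estimate at every resolution and aggregating them, can be sketched with plain numpy heatmaps. This is a toy illustration of the fusion step only; the actual network uses learned deconvolution layers, not nearest-neighbour upsampling:

```python
import numpy as np

def upsample2x(h):
    """Nearest-neighbour 2x upsampling of a heatmap."""
    return np.repeat(np.repeat(h, 2, axis=0), 2, axis=1)

def fuse_multiscale(heatmaps):
    """Fuse per-scale joint heatmaps (coarse to fine) by upsampling
    each to the finest resolution and averaging."""
    target = heatmaps[-1].shape[0]
    ups = []
    for h in heatmaps:
        while h.shape[0] < target:
            h = upsample2x(h)
        ups.append(h)
    return np.mean(ups, axis=0)

coarse = np.zeros((8, 8));  coarse[2, 2] = 1.0   # peak at coarse scale
fine = np.zeros((16, 16));  fine[4, 4] = 1.0     # matching peak at fine scale
fused = fuse_multiscale([coarse, fine])
```

Where the per-scale peaks agree, the fused response is reinforced; where they disagree, it is attenuated, which is one intuition for why intermediate-resolution supervision helps.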

Robot Posture Estimation Using Inner-Pipe Image

  • Yoon, Ji-Sup; Kang, E-Sok
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.173.1-173 / 2001
  • This paper proposes an image processing algorithm that estimates the pose of a pipe crawling robot. Pipe crawling robots are usually equipped with a lighting device and a camera on the head for monitoring and inspection purposes. The proposed method uses these existing devices without introducing extra sensors, and is based on the fact that the position and the intensity of the reflected light vary with the robot posture. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. To investigate its performance, the algorithm is applied to a sewage maintenance robot.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model combined with template matching detects the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with an optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using a Radial Basis Function (RBF). Experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
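The RBF step described above, propagating control-point displacements to the surrounding non-feature points, can be sketched as follows. This is a minimal Gaussian-RBF example with normalized weights; the kernel width and the normalization are assumptions for illustration, not the paper's exact fitting:

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=10.0):
    """Move each free point by a Gaussian-weighted blend of the
    control-point displacements (weights normalized to sum to 1)."""
    out = []
    for p in points:
        d2 = np.sum((controls - p) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2 * sigma ** 2))         # Gaussian RBF weights
        w /= w.sum()
        out.append(p + w @ displacements)          # blended displacement
    return np.array(out)

controls = np.array([[0.0, 0.0], [10.0, 0.0]])
disp = np.array([[1.0, 0.0], [1.0, 0.0]])          # both controls move +x
moved = rbf_deform(controls, disp, np.array([[5.0, 0.0]]))
# the midpoint inherits the full +x displacement: [[6.0, 0.0]]
```

With this scheme, points near a control point follow it closely while distant points are barely affected, which keeps the mesh deformation smooth.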

Head Orientation-based Gaze Tracking (얼굴의 움직임을 이용한 응시점 추적)

  • ;R.S. Ramakrishna
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.401-403 / 1999
  • In this paper, we propose an efficient and fast method for locating the facial feature points (eyes, nose, and mouth) and computing the head orientation for gaze tracking against an unconstrained background. Face detection has been studied extensively, but many methods are ineffective or require restrictive conditions. The proposed method is based on a binarized image and computes similarity using complete graph matching: after labeling the image binarized with an arbitrary threshold, the similarity of each pair of blocks is computed, and the two blocks most similar to a pair of eyes are selected as the eyes. After locating the eyes, the mouth and nose are found. The average processing time for a 360$\times$240 image was under 0.2 seconds, and under 0.15 seconds when the search region was reduced by predicting the next search area. To compute the head movement, we used template matching based on the angles formed by the feature points. Experiments were carried out under various lighting conditions and with multiple users, and showed good results in both speed and accuracy. Moreover, since only intensity information is used, the method also works with monochrome cameras, which can be economically beneficial.

3D Visualization using Face Position and Direction Tracking (얼굴 위치와 방향 추적을 이용한 3차원 시각화)

  • Kim, Min-Ha; Kim, Ji-Hyun; Kim, Cheol-Ki; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.173-175 / 2011
  • In this paper, we present a user interface that shows 3D objects at various angles using the tracked 3D head position and orientation. In the implemented interface, first, when the user's head moves left/right (X-axis) or up/down (Y-axis), the displayed objects are translated toward the user's eyes using the 3D head position. Second, when the user's head rotates about the X-axis (pitch) or the Y-axis (yaw), the displayed objects are rotated by the same amount as the user's head. Experiments over a variety of user positions and orientations show good accuracy and responsiveness for 3D visualization.

Human Body Tracking and Pose Estimation Using CamShift Based on Kalman Filter and Weighted Search Windows (칼만 필터와 가중탐색영역 CAMShift를 이용한 휴먼 바디 트래킹 및 자세추정)

  • Min, Jae-Hong; Kim, In-Gyu; Hwang, Seung-Jun; Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.16 no.3 / pp.545-552 / 2012
  • In this paper, we propose a modified multi-CAMShift algorithm based on a Kalman filter and weighted search windows (KWMCAMShift) that extracts skin color regions and tracks several human body parts for a real-time human tracking system. The modified CAMShift algorithm generates a background model, extracts the skin regions of the hands and head, and tracks the body parts. A Kalman filter stabilizes the tracking search window against the skin area changing across consecutive frames. Occlusions between parts are avoided by weighting the main search area and penalizing non-search areas, and shadows are eliminated using the background model and the shadow intensity. The proposed KWMCAMShift algorithm can estimate human pose in real time and achieves 96.82% accuracy even under occlusion.
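The role of the Kalman filter here, smoothing the CamShift search-window center across frames, can be sketched in one dimension. This is a simplified constant-position model; the paper's actual state vector and noise settings may differ:

```python
class Kalman1D:
    """Constant-position Kalman filter used to smooth one coordinate
    of a CamShift search-window center (a simplified 1-D sketch)."""

    def __init__(self, q=0.01, r=1.0):
        self.x = None          # state estimate (None until first measurement)
        self.p = 1.0           # estimate variance
        self.q, self.r = q, r  # process / measurement noise variances

    def update(self, z):
        if self.x is None:                 # initialize with first measurement
            self.x = z
            return self.x
        self.p += self.q                   # predict: variance grows
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D()
noisy_centers = [10.0, 10.5, 9.6, 10.2, 9.9]   # jittery window centers
smoothed = [kf.update(z) for z in noisy_centers]
```

With a small process noise `q`, the filtered center stays close to the underlying position even when the detected skin blob jumps between frames, which is what keeps the search window stable.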