• Title/Summary/Keyword: 3D facial features

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE: Software and Applications / v.31 no.5 / pp.617-624 / 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on a 3D facial surface. In general, humans perceive how deep or shallow a region is, relative to its surroundings, by comparing neighboring depth information across the regions of an object: the larger the depth difference between regions, the more easily each region is distinguished. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input, and ADD values are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD maps and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of a 3D facial surface, is localized effectively and accurately.
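
The core ADD computation is simple to express. Below is a minimal sketch, assuming the range image is a 2D NumPy array; the pixel offset and the threshold are arbitrary placeholders, since the paper's actual parameters are not given in the abstract.

```python
# Minimal sketch of Adjacent Depth Differences (ADD) on a range image.
# The offset "d" and the threshold below are illustrative assumptions.
import numpy as np

def adjacent_depth_differences(depth, d=5):
    """Return horizontal and vertical ADD maps for a range image."""
    add_h = np.zeros_like(depth, dtype=float)
    add_v = np.zeros_like(depth, dtype=float)
    # Difference between pixels separated by d columns (horizontal direction)
    add_h[:, :-d] = np.abs(depth[:, d:] - depth[:, :-d])
    # Difference between pixels separated by d rows (vertical direction)
    add_v[:-d, :] = np.abs(depth[d:, :] - depth[:-d, :])
    return add_h, add_v

# A prominent region such as the nose produces large ADD values, so
# thresholding the maps yields candidate feature regions.
depth = np.random.rand(240, 320)          # stand-in for a real range image
add_h, add_v = adjacent_depth_differences(depth)
candidates = (add_h + add_v) > 0.8        # hypothetical threshold
```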

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness to varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To resolve this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
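
As a rough illustration of a skin model that is linear in a Hue-Tint plane, the sketch below classifies pixels by their distance to a fitted line. The "tint" channel used here and the line coefficients are placeholders chosen for illustration, not the paper's definitions.

```python
# Hedged sketch of a linear skin-chrominance classifier in a Hue-Tint plane.
# The tint definition, line coefficients, and tolerance are assumptions.
import numpy as np
import cv2

def skin_mask(bgr, a=0.8, b=10.0, tol=12.0):
    """Return a boolean skin mask for a BGR uint8 frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[..., 0].astype(float)                  # OpenCV hue range [0, 179]
    # Placeholder "tint" channel: normalized red chrominance
    bgr_f = bgr.astype(float) + 1e-6
    tint = 255.0 * bgr_f[..., 2] / bgr_f.sum(axis=-1)
    # Keep pixels close to the fitted line tint = a * hue + b
    return np.abs(tint - (a * hue + b)) < tol
```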

Robust 3D Facial Landmark Detection Using Angular Partitioned Spin Images (각 분할 스핀 영상을 사용한 3차원 얼굴 특징점 검출 방법)

  • Kim, Dong-Hyun;Choi, Kang-Sun
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.199-207 / 2013
  • Spin images, which efficiently represent the surface features of 3D mesh models, have been used to detect facial landmark points. However, at a given point, different normal directions can lead to quite different spin images. Moreover, since 3D points are projected into the 2D ($\alpha$-$\beta$) space during spin image generation, surface features cannot be described clearly. In this paper, we present a method to detect 3D facial landmarks using improved spin images obtained by partitioning the search area with respect to angle. By generating sub-spin images for angularly partitioned 3D spaces, more distinctive features describing the corresponding surfaces can be obtained, improving landmark detection performance. To generate spin images that are robust to inaccurate surface normal directions, we average each surface normal with its neighboring normal vectors. The experimental results show that the proposed method increases landmark detection accuracy by about 34% over a conventional method.
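
The sketch below illustrates the general idea of angular-partitioned (sub-)spin images at one vertex: neighbors are binned by their ($\alpha$, $\beta$) coordinates and additionally by an azimuth angle around the (smoothed) normal. The tangent reference direction, bin counts, and support size are assumptions; the paper's exact construction may differ.

```python
# Sketch of sub-spin images for a vertex with position p and normal n.
# Neighbor points are accumulated into one spin image per azimuth sector.
import numpy as np

def sub_spin_images(p, n, neighbors, sectors=4, bins=16, size=0.05):
    n = n / np.linalg.norm(n)
    # Build an arbitrary tangent frame (t1, t2) orthogonal to n
    t1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-6:
        t1 = np.cross(n, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)

    d = neighbors - p
    beta = d @ n                                     # height along the normal
    radial = d - np.outer(beta, n)
    alpha = np.linalg.norm(radial, axis=1)           # distance from the normal axis
    azimuth = np.arctan2(radial @ t2, radial @ t1)   # angle used for partitioning

    images = np.zeros((sectors, bins, bins))
    sector = ((azimuth + np.pi) / (2 * np.pi) * sectors).astype(int) % sectors
    a_bin = np.clip((alpha / size * bins).astype(int), 0, bins - 1)
    b_bin = np.clip(((beta + size) / (2 * size) * bins).astype(int), 0, bins - 1)
    for s, i, j in zip(sector, a_bin, b_bin):
        images[s, i, j] += 1                         # one sub-spin image per sector
    return images
```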

3D Head Pose Estimation Using The Stereo Image (스테레오 영상을 이용한 3차원 포즈 추정)

  • 양욱일;송환종;이용욱;손광훈
    • Proceedings of the IEEK Conference / 2003.07e / pp.1887-1890 / 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a stereo image pair, we automatically extract several important facial feature points using the disparity map, a Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region dividing method based on the disparity map: in an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so we separate the face from the background by the divergence of the disparity. To estimate the 3D head pose, we propose a 2D-3D Error Compensated-SVD (EC-SVD) algorithm. We estimate the 3D coordinates of the facial features from the stereo correspondence and then estimate the head pose of the input image using the EC-SVD method. Experimental results show that the proposed method is capable of estimating the pose accurately.
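
EC-SVD builds on the standard SVD (Kabsch) solution for the rigid transform between two 3D point sets; a minimal sketch of that underlying step is shown below, without the error-compensation stage described in the paper.

```python
# Least-squares rigid transform between a reference set of 3D facial
# features and the features triangulated from the current stereo pair.
import numpy as np

def rigid_transform(ref_pts, cur_pts):
    """Return R, t such that cur ≈ R @ ref + t (Kabsch/SVD solution)."""
    ref_c = ref_pts - ref_pts.mean(axis=0)
    cur_c = cur_pts - cur_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
    # Correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cur_pts.mean(axis=0) - R @ ref_pts.mean(axis=0)
    return R, t
```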

3-D Facial Motion Estimation using Extended Kalman Filter (확장 칼만 필터를 이용한 얼굴의 3차원 움직임량 추정)

  • 한승철;박강령;김재희
    • Proceedings of the IEEK Conference / 1998.10a / pp.883-886 / 1998
  • In order to detect the user's gaze position on a monitor by computer vision, accurate estimates of the 3D positions and 3D motion of facial features are required. In this paper, we apply an EKF (Extended Kalman Filter) to estimate 3D motion, assuming that the motion is "smooth" in the sense that it can be represented by a constant-velocity translational and rotational model. Rotational motion is defined about the origin of a face-centered coordinate system, while translational motion is defined about the origin of a camera-centered coordinate system. For the experiments, we use 3D facial motion data generated by computer simulation. Experimental results show that the simulation data and the estimation results of the EKF are similar.
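
A minimal sketch of the constant-velocity predict/update cycle is shown below, restricted to the translational state with a linear position measurement; the rotational state and the nonlinear measurement model of the full EKF are omitted, and all noise values are placeholders.

```python
# Constant-velocity Kalman cycle for the translational part of the motion.
# Frame rate, process noise, and measurement noise are assumed values.
import numpy as np

dt = 1.0 / 30.0                                   # assumed frame interval
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])     # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])      # we observe 3D position only
Q = 1e-4 * np.eye(6)                              # process noise (placeholder)
R = 1e-2 * np.eye(3)                              # measurement noise (placeholder)

def kalman_step(x, P, z):
    """One predict/update step; x = [position, velocity], z = measured position."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with the measurement
    P = (np.eye(6) - K @ H) @ P
    return x, P
```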

Pose-normalized 3D Face Modeling for Face Recognition

  • Yu, Sun-Jin;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.984-994 / 2010
  • Pose variation is a critical problem in face recognition. Three-dimensional (3D) face recognition techniques have been proposed because 3D data contains depth information that may allow pose variation to be handled more effectively than with 2D face recognition methods. This paper proposes a pose-normalized 3D face modeling method that translates and rotates any pose angle to a frontal pose using a plane fitting method based on Singular Value Decomposition (SVD). First, we reconstruct 3D face data with a stereo vision method. Second, the nose peak point is estimated from depth information, and the pose angle is then estimated by a facial plane fitting algorithm using four facial features. Next, using the estimated pose angle, the 3D face is translated and rotated to a frontal pose. To demonstrate the effectiveness of the proposed method, we designed 2D and 3D face recognition experiments. The experimental results show that the normalized 3D face recognition method outperforms an unnormalized 3D face recognition method in overcoming the problems of pose variation.
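
The plane-fitting and frontalization step can be sketched as follows: fit a facial plane to a few 3D feature points with SVD and rotate the whole face so that the plane normal aligns with the camera's z-axis. Feature selection and the stereo reconstruction itself are outside this snippet.

```python
# SVD plane fit to facial feature points, followed by a rotation that
# aligns the plane normal with the z-axis (frontal pose).
import numpy as np

def frontalize(face_pts, feature_pts):
    centered = feature_pts - feature_pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal
    normal = np.linalg.svd(centered)[2][-1]
    if normal[2] < 0:
        normal = -normal                           # make the normal face the camera
    target = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, target)
    c = float(np.dot(normal, target))
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)       # rotation aligning normal to z
    return (face_pts - feature_pts.mean(axis=0)) @ R.T
```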

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably and has many applications. Most previous work relies only on image processing algorithms, so it requires long processing times and imposes many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by facial movement is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an accuracy between the computed and real positions of about 4.2 cm RMS error.
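
Below is a minimal sketch of the plane-normal gaze computation, assuming three reconstructed 3D feature points and a monitor lying in the plane z = 0 of the camera coordinate system; the paper's coordinate conventions and the eye-movement neural network stage are not reproduced.

```python
# Gaze point from the normal of the plane spanned by three 3D facial features.
import numpy as np

def facial_gaze_point(p1, p2, p3):
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    origin = (p1 + p2 + p3) / 3.0                  # ray starts at the face centroid
    if normal[2] > 0:
        normal = -normal                           # point toward the monitor plane z = 0
    t = -origin[2] / normal[2]                     # assumes the face is not parallel to the screen
    return origin + t * normal                     # intersection with the screen plane
```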

Global Feature Extraction and Recognition from Matrices of Gabor Feature Faces

  • Odoyo, Wilfred O.;Cho, Beom-Joon
    • Journal of information and communication convergence engineering / v.9 no.2 / pp.207-211 / 2011
  • This paper presents a method for facial feature representation and recognition based on the covariance matrices of Gabor-filtered images. Gabor filters are a powerful image processing tool that responds to different local orientations and wave numbers around points of interest, especially the local features of the face. This is a unique attribute needed to extract distinctive features around facial components such as the eyebrows, eyes, mouth, and nose. The covariance matrices computed on Gabor-filtered faces are adopted as the feature representation for face recognition. A geodesic distance is used as the matching measure and is preferred for its global consistency over other methods; it takes into account the position of the data points in addition to the geometric structure of the given face images. The proposed method is invariant and robust under rotation, pose, and boundary distortion. Tests run on random images as well as on the publicly available JAFFE and FRAV3D face recognition databases yield impressively high recognition rates.
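
The representation can be sketched as follows: filter the face with a small Gabor bank, form per-pixel feature vectors, take their covariance, and compare two faces with the affine-invariant geodesic distance between covariance matrices. The filter parameters below are placeholders, and the paper's exact feature vector and metric may differ.

```python
# Covariance-of-Gabor-features descriptor and a geodesic (affine-invariant)
# distance between two such covariance matrices.
import numpy as np
import cv2
from scipy.linalg import eigh

def gabor_covariance(gray):
    """Covariance of per-pixel Gabor responses for a grayscale face image."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations (assumed)
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        feats.append(cv2.filter2D(gray.astype(np.float32), -1, kern).ravel())
    F = np.vstack(feats)                           # features x pixels
    return np.cov(F)

def geodesic_distance(C1, C2):
    """Affine-invariant Riemannian distance between SPD matrices."""
    lam = eigh(C1, C2, eigvals_only=True)          # generalized eigenvalues
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```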

Analogical Face Generation based on Feature Points

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk
    • Journal of Multimedia Information System / v.6 no.1 / pp.15-22 / 2019
  • There are many ways to perform face recognition, and the first step is face detection; if the face is not found in this step, recognition fails. Face detection is difficult because the face's appearance varies with its size, left-right and up-down rotation, side versus frontal view, facial expression, and lighting conditions. In this study, facial features are extracted and then geometrically reconstructed in order to improve the recognition rate within the extracted face region. The reconstructed facial feature vector is also used to adjust the face angle and to improve the recognition rate at each face angle. In recognition experiments using the geometrically reconstructed results, recognition performance improved for both up-down and left-right facial angles.
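
One standard way to realize this kind of geometric normalization is to align the detected feature points to a canonical frontal template with a least-squares similarity transform (Procrustes). The sketch below shows that idea; the paper's own reconstruction and angle adjustment may differ, and the template is an assumption.

```python
# Similarity (Procrustes) alignment of detected 2D landmarks to a frontal
# template; reflection correction is omitted for brevity.
import numpy as np

def similarity_align(landmarks, template):
    """Map landmarks onto the template by least-squares scale, rotation, shift."""
    src = landmarks - landmarks.mean(axis=0)
    dst = template - template.mean(axis=0)
    U, S, Vt = np.linalg.svd(src.T @ dst)
    R = (U @ Vt).T                                 # 2x2 rotation
    scale = S.sum() / (src ** 2).sum()
    return scale * src @ R.T + template.mean(axis=0)
```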

Symmetric Shape Deformation Considering Facial Features and Attractiveness Improvement (얼굴 특징을 고려한 대칭적인 형상 변형과 호감도 향상)

  • Kim, Jeong-Sik;Shin, Il-Kyu;Choi, Soo-Mi
    • Journal of the Korea Computer Graphics Society / v.16 no.2 / pp.29-37 / 2010
  • In this paper, we present a novel deformation method for alleviating the asymmetry of a scanned 3D face while considering facial features. To handle detailed areas of the face, we developed a new local 3D shape descriptor based on facial features and surface curvatures. Our shape descriptor improves accuracy when deforming a 3D face toward a symmetric configuration, because it provides accurate point pairing with respect to the plane of symmetry. In addition, we use a point-based representation throughout all stages of symmetrization, which makes it much easier to support discrete processing. Finally, we performed a statistical analysis to assess subjects' preference for the faces symmetrized by our approach.
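
A simplified sketch of point-based symmetrization is shown below: reflect the point cloud across the symmetry plane, pair each point with its nearest reflected counterpart, and blend each pair toward its midpoint. The paper pairs points with its feature- and curvature-based descriptor rather than plain nearest neighbors, so this is only the geometric skeleton of the idea.

```python
# Point-based symmetrization with nearest-neighbor pairing across a mirror plane.
import numpy as np
from scipy.spatial import cKDTree

def symmetrize(points, plane_normal, plane_point, blend=0.5):
    n = plane_normal / np.linalg.norm(plane_normal)
    dist = (points - plane_point) @ n
    reflected = points - 2.0 * np.outer(dist, n)   # mirror across the symmetry plane
    idx = cKDTree(reflected).query(points)[1]      # nearest reflected partner
    midpoints = 0.5 * (points + reflected[idx])
    return (1.0 - blend) * points + blend * midpoints
```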