• Title/Summary/Keyword: Face Rotation (얼굴 회전)

Synthesizing Faces of Animation Characters Using a 3D Model (3차원 모델을 사용한 애니메이션 캐릭터 얼굴의 합성)

  • Jang, Seok-Woo; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.8 / pp.31-40 / 2012
  • In this paper, we propose a method of synthesizing faces of a user and an animation character using a 3D face model. The suggested method first receives two orthogonal 2D face images and extracts major features of the face through the template snake. It then generates a user-customized 3D face model by adjusting a generalized face model using the extracted facial features and by mapping texture maps obtained from the two input images onto the 3D face model. Finally, it generates a user-customized animation character by synthesizing the generated 3D model onto an animation character, reflecting the position, size, facial expression, and rotation information of the character. Experimental results verify the performance of the suggested algorithm. We expect that our method will be useful for various applications such as games and animation movies.
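
As a schematic illustration of the model-adjustment step described in this abstract, the sketch below rescales a generic 3D face mesh per axis so that its key feature distances match distances measured from the two orthogonal input views. The function name, the choice of measurements (eye distance, face height, face depth), and the simple per-axis scaling are illustrative assumptions, not the paper's actual template-snake fitting or texture-mapping procedure.

```python
# A minimal sketch, assuming the generic model is an (N, 3) vertex array with
# x = width, y = height, z = depth. The measurements are hypothetical inputs.
import numpy as np

def adjust_generic_model(vertices, generic_measure, image_measure):
    """Per-axis scaling of a generic face mesh.

    generic_measure / image_measure: (eye_distance, face_height, face_depth)
    measured on the generic model and from the two orthogonal input views.
    """
    scale = np.asarray(image_measure, float) / np.asarray(generic_measure, float)
    return vertices * scale  # broadcasts the three scale factors over all vertices
```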

A Face Detection using Pupil-Template from Color Base Image (컬러 기반 영상에서 눈동자 템플릿을 이용한 얼굴영상 추출)

  • Choi, Ji-Young; Kim, Mi-Kyung; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.828-831 / 2005
  • In this paper, we propose a method to detect human faces in color images using pupil-template matching. Face detection is done in three stages: (i) separating skin regions from non-skin regions; (ii) generating a face region by fitting a best-fit ellipse; (iii) detecting the face with a pupil template. Skin-region detection is based on a skin color model: a gray-scale likelihood image is generated from the original image using the skin model and then segmented to separate skin regions from non-skin regions. The face region is obtained by applying a best-fit ellipse computed from moments. The generated face regions are finally matched against the pupil template, and the face is detected.
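
A rough sketch of the three-stage pipeline summarized above, using common OpenCV primitives as stand-ins: a YCrCb skin-color threshold for stage (i), moments of the skin mask for the best-fit-ellipse stage (ii), and normalized cross-correlation for the pupil-template stage (iii). The color bounds, the matching threshold, and the template itself are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

def detect_face(bgr, pupil_template, match_thresh=0.6):
    # (i) separate skin from non-skin with a simple color model (bounds are illustrative)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # (ii) best-fit ellipse parameters of the skin region from its moments
    m = cv2.moments(skin, binaryImage=True)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]               # ellipse center
    angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])  # ellipse orientation

    # (iii) verify the candidate region by matching the pupil template
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    score = cv2.matchTemplate(gray, pupil_template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(score)
    return (cx, cy, angle, loc) if best >= match_thresh else None
```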

Approximate Front Face Image Detection Using Facial Feature Points (얼굴 특징점들을 이용한 근사 정면 얼굴 영상 검출)

  • Kim, Su-jin; Jeong, Yong-seok; Oh, Jeong-su
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.675-678 / 2018
  • Since the face has unique properties for identifying a person, face recognition is actively used in security and authentication areas such as access control, criminal search, and CCTV. The frontal face image contains the most facial information, so it is necessary to acquire a face image that is as close to frontal as possible for face recognition. In this study, the face region is detected using the AdaBoost algorithm with Haar-like features and tracked using the mean-shift algorithm. Then, feature points of facial elements such as the eyes and the mouth are extracted from the face region, the size ratio of the two eyes and the degree of rotation of the face are calculated from their geometric information, and the approximate front face image is presented in real time.
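
The sketch below illustrates the kind of frontal-face test this abstract describes: a Haar-cascade face detector, eye localization, an in-plane rotation estimate from the line joining the eye centers, and the eye-size ratio as a yaw cue. The cascade files are the stock OpenCV ones, the 10-degree and 0.9 thresholds are illustrative assumptions, and the mean-shift tracking stage is omitted.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def is_approx_frontal(gray):
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
        if len(eyes) < 2:
            continue
        (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(eyes, key=lambda e: e[0])[:2]
        # in-plane rotation (roll) from the line joining the two eye centers
        dy = (ey2 + eh2 / 2) - (ey1 + eh1 / 2)
        dx = (ex2 + ew2 / 2) - (ex1 + ew1 / 2)
        roll = np.degrees(np.arctan2(dy, dx))
        # yaw cue: the nearer eye appears larger when the head is turned
        ratio = min(ew1, ew2) / max(ew1, ew2)
        if abs(roll) < 10 and ratio > 0.9:
            return True
    return False
```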

Head Pose Estimation Using Error Compensated Singular Value Decomposition for 3D Face Recognition (3차원 얼굴 인식을 위한 오류 보상 특이치 분해 기반 얼굴 포즈 추정)

  • 송환종; 양욱일; 손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.6 / pp.31-40 / 2003
  • Most face recognition systems are based on 2D images and applied in many applications. However, it is difficult to recognize a face when the pose varies severely. Therefore, head pose estimation is an inevitable procedure to improve the recognition rate when a face is not frontal. In this paper, we propose a novel head pose estimation algorithm for 3D face recognition. Given the 3D range image of an unknown face as an input, we automatically extract facial feature points based on the face curvature. We propose an Error Compensated Singular Value Decomposition (EC-SVD) method based on the extracted facial feature points: we obtain the initial rotation angles with the SVD method and then perform a refinement procedure to compensate for the remaining errors. The proposed algorithm operates on the extracted facial features in the normalized 3D face space. In addition, we propose a 3D nearest neighbor classifier to select face candidates for 3D face recognition. Simulation results demonstrate the efficiency and validity of the proposed algorithm.
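
At the core of the SVD-based pose estimate described above is the standard orthogonal alignment of corresponding 3D feature points; a compact sketch of that step follows. The error-compensation refinement and the curvature-based feature extraction from the paper are not reproduced here.

```python
import numpy as np

def estimate_rotation(probe_pts, ref_pts):
    """Rotation aligning probe feature points (N, 3) to reference points (N, 3)."""
    P = probe_pts - probe_pts.mean(axis=0)   # center both point sets
    Q = ref_pts - ref_pts.mean(axis=0)
    H = P.T @ Q                              # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R                                 # apply as (probe_pts - mean) @ R.T
```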

Geometrical Feature-Based Detection of Pure Facial Regions (기하학적 특징에 기반한 순수 얼굴영역 검출기법)

  • 이대호; 박영태
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.773-779 / 2003
  • Locating the exact positions of facial components is a key preprocessing step for realizing highly accurate and reliable face recognition schemes. In this paper, we propose a simple but powerful method for detecting isolated facial components such as the eyebrows, eyes, and mouth, which are horizontally oriented and have relatively dark gray levels. The method is based on shape-resolving locally optimum thresholding, which may guarantee isolated detection of each component. We show that pure facial regions can be determined by grouping facial features that satisfy simple geometric constraints derived from the unique facial structure. In a test on over 1000 images in the AR face database, pure facial regions were detected correctly for every face image without glasses. Very few errors occurred for face images with thick-framed glasses because the eyebrow pairs were occluded. The proposed scheme may be best suited for the later stage of classification using either feature mappings or template matching, because of its capability of handling rotational and translational variations.
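
The sketch below illustrates the general idea in simplified form: dark, horizontally elongated blobs are isolated with local thresholding and then grouped under simple geometric constraints (two eye candidates roughly level, a mouth candidate below their midpoint). Plain adaptive thresholding stands in for the paper's shape-resolving locally optimum thresholding, and every numeric threshold is an illustrative assumption.

```python
import cv2

def find_pure_face_region(gray):
    # locally adaptive thresholding keeps dark facial components isolated
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    n, _, stats, centers = cv2.connectedComponentsWithStats(binary)

    # keep horizontally elongated dark blobs as eye/eyebrow/mouth candidates
    cand = [centers[i] for i in range(1, n)
            if stats[i, cv2.CC_STAT_WIDTH] > 1.5 * stats[i, cv2.CC_STAT_HEIGHT]]

    # group candidates with simple geometric constraints on facial structure
    for i, c1 in enumerate(cand):
        for c2 in cand[i + 1:]:
            eye_dist = abs(c1[0] - c2[0])
            if eye_dist == 0 or abs(c1[1] - c2[1]) > 0.15 * eye_dist:
                continue  # the two eye candidates must be nearly level
            mid_x, mid_y = (c1 + c2) / 2
            for c3 in cand:
                # mouth candidate: below the eye line, near the vertical midline
                if c3[1] > mid_y + 0.6 * eye_dist and abs(c3[0] - mid_x) < 0.4 * eye_dist:
                    x0, x1 = sorted((c1[0], c2[0]))
                    return (int(x0 - 0.5 * eye_dist), int(mid_y - 0.6 * eye_dist),
                            int(x1 + 0.5 * eye_dist), int(c3[1] + 0.5 * eye_dist))
    return None
```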

Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems / v.7 no.3 / pp.23-31 / 2002
  • In this paper, we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited for representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of a facial expression image. The recognition algorithm has two parts: feature-vector extraction and the recognition process. The feature vector comprises modified 2-D conditional moments based on a Gibbs distribution estimated for the facial image. In the facial expression recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, recognition experiments for the four universal expressions (anger, fear, happiness, surprise) were conducted with facial image sequences on a workstation. The experimental results reveal that the proposed scheme achieves a high recognition rate of over 95%.
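
The paper's feature vector is built from modified 2-D conditional moments of an estimated Gibbs distribution, which is not reproduced here; as a loose stand-in, the sketch below shows how a translation-, rotation-, and scale-invariant moment vector can be computed from a facial image using Hu moments. It illustrates only the invariance property claimed above, not the GRF-based feature or the left-right HMM classifier.

```python
import cv2
import numpy as np

def invariant_feature_vector(gray):
    m = cv2.moments(gray)            # raw, central, and normalized central moments
    hu = cv2.HuMoments(m).flatten()  # 7 invariants under translation, rotation, and scale
    # log-scale the invariants so their magnitudes are comparable
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```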

Implement of Face Recognition using Short Time Face Train for User Access Management of Cafe and Restaurant (카페와 음식점의 사용자 출입 관리를 위한 단시간 얼굴 학습을 통한 얼굴 인식 시스템 구현)

  • Lee, Hyeopgeon; Yoo, Yeanjun; Hong, Seokmin; Hong, Du-pyo
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.808-810 / 2021
  • Face recognition technologies have advanced with the development of various AI platforms and ongoing research on algorithms. Most face recognition algorithms require a large amount of training data to achieve high accuracy. However, in environments where people stay only a short time, such as coffee shops and restaurants, applying existing face recognition techniques to check whether people have entered or left is unsuitable because of insufficient training data. Therefore, in this paper, we implement a face recognition system based on short-time face training for managing user entry to cafes and restaurants. With this system, cafe and restaurant operators can use customers' entry and exit data to analyze table turnover and to check compliance with the COVID-19 quarantine rule that groups of two or more may use a cafe for no more than one hour.
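
As an illustration of how a recognizer can be trained from only a few face crops captured at entry, the sketch below uses OpenCV's LBPH recognizer (from opencv-contrib-python). The function names, the distance threshold, and the enrollment flow are illustrative assumptions, not the system implemented in the paper.

```python
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()
_trained = False

def enroll(face_crops, visitor_id):
    """Train on a handful of grayscale face crops captured when a visitor enters."""
    global _trained
    labels = np.full(len(face_crops), visitor_id, dtype=np.int32)
    if _trained:
        recognizer.update(face_crops, labels)  # add a visitor without retraining from scratch
    else:
        recognizer.train(face_crops, labels)
        _trained = True

def identify(face_crop, max_distance=70.0):
    """Return the visitor id if the crop matches an enrolled face closely enough."""
    visitor_id, distance = recognizer.predict(face_crop)
    return visitor_id if distance < max_distance else None
```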

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo; Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.53-60 / 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
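
A bare-bones sketch of the Kalman-filter prediction step described above: a constant-velocity filter predicts where a pupil or facial feature will appear in the next frame, so the Gabor-space detection only needs to search the vicinity of that prediction. The noise covariances are illustrative assumptions, and the pupil constraint and graph-based reliability propagation are not reproduced.

```python
import cv2
import numpy as np

def make_feature_tracker():
    kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track(kf, measured_xy):
    predicted = kf.predict()                 # center of the search window for this frame
    kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))  # fuse the detection
    return predicted[:2].ravel()             # predicted (x, y) of the feature
```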

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식; 정연숙; 손동주; 나상동; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05b / pp.603-607 / 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

A Study on Creation of 3D Facial Model Using Facial Image (임의의 얼굴 이미지를 이용한 3D 얼굴모델 생성에 관한 연구)

  • Lee, Hea-Jung; Joung, Suck-Tae
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.21-28 / 2007
  • Facial modeling and animation technology has long been studied in the computer graphics field. Facial modeling technology is widely used for research purposes such as virtual reality and MPEG-4, and in industries such as movies, advertising, and games. Therefore, developing a 3D facial model that can interact with humans is essential for a more realistic interface. We developed a realistic and convenient 3D facial modeling system that uses only an arbitrary facial image. The system can easily be fitted to an arbitrary facial image using the Korean standard facial model (generic model): it generates a 3D facial model intuitively by elastically adjusting control points after fitting the control points on the generic model wireframe to the facial image. The generated 3D facial model can be confirmed and modified by moving, magnifying, reducing, and rotating it. We experimented with 30 facial images of 630×630 pixels to verify the usefulness of the developed system.
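
A small sketch of the confirm-and-modify step mentioned above (moving, magnifying, reducing, and rotating the fitted model): the model's vertices are transformed with a rotation about the vertical axis plus uniform scaling and translation so it can be inspected from different views. The vertex-array representation and parameter names are illustrative assumptions, not the paper's system.

```python
import numpy as np

def transform_model(vertices, yaw_deg=0.0, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Rotate an (N, 3) vertex array about the y axis, then scale and translate it."""
    t = np.radians(yaw_deg)
    Ry = np.array([[ np.cos(t), 0.0, np.sin(t)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(t), 0.0, np.cos(t)]])
    return scale * vertices @ Ry.T + np.asarray(offset)
```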
