• Title/Summary/Keyword: Face Estimation with Template Matching

Search results: 2

DSP based real-time ATM security system

  • Lee, Tae-Min; Kim, Yong-Guk
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.654-658 / 2008
  • People who use a bank's automated teller machine (ATM) for fraudulent purposes usually conceal their faces with a mask, sunglasses, or a cap while making a withdrawal. Because the face is covered in this way, facial features are hard to detect, which makes it difficult to identify the person through face recognition. This paper proposes a method that extracts the face region using difference images and template matching, detects facial feature points using Adaptive Boost, and then estimates whether the person's face is currently concealed using skin color information. Because the proposed method runs on a DSP, which is well suited to image signal processing, inexpensive, and low in power consumption, it is suitable for mounting in an ATM and also provides an efficient structure that can be applied to other kinds of verification systems.

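For concreteness, a minimal OpenCV sketch of the pipeline described in the abstract above: a difference image gates the search, template matching localizes the face, and a skin-color ratio inside the matched region serves as a rough concealment cue. This is not the authors' DSP implementation; the thresholds, HSV skin bounds, and all function and parameter names are illustrative assumptions.

```python
import cv2
import numpy as np

def estimate_concealment(prev_gray, curr_gray, curr_bgr, face_template,
                         motion_min_pixels=500, skin_ratio_thresh=0.35):
    """Return (concealed, skin_ratio, face_box) or None when no motion is seen."""
    # 1. Difference image: flag pixels that changed between consecutive frames.
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(motion_mask) < motion_min_pixels:
        return None  # nobody moving in front of the camera

    # 2. Template matching to locate the face region in the current frame.
    scores = cv2.matchTemplate(curr_gray, face_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    x, y = max_loc
    th, tw = face_template.shape[:2]
    face_roi = curr_bgr[y:y + th, x:x + tw]

    # 3. Skin-color ratio inside the matched region; a low ratio suggests the
    #    face is partly covered by a mask, sunglasses, or a cap.
    hsv = cv2.cvtColor(face_roi, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)    # heuristic skin bounds,
    upper = np.array([25, 180, 255], dtype=np.uint8)  # not the paper's model
    skin_mask = cv2.inRange(hsv, lower, upper)
    skin_ratio = cv2.countNonZero(skin_mask) / float(tw * th)

    concealed = skin_ratio < skin_ratio_thresh
    return concealed, skin_ratio, (x, y, tw, th)
```

In the paper's pipeline the feature points inside the matched region would additionally be verified with AdaBoost before the skin-color check; that stage is omitted here for brevity.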

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from each video frame using the non-parametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process. The control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
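The last step described in the abstract above, deforming the non-feature vertices around the tracked control points with Radial Basis Functions, can be sketched as a generic Gaussian-RBF scattered-data interpolation. This is not the authors' exact formulation; the kernel width `sigma` and all names are illustrative assumptions.

```python
import numpy as np

def rbf_deform(control_rest, control_disp, vertices, sigma=20.0):
    """Propagate control-point displacements to nearby mesh vertices.

    control_rest: (K, 3) rest positions of the tracked facial feature points.
    control_disp: (K, 3) per-frame displacements of those feature points.
    vertices:     (N, 3) non-feature mesh vertices to be deformed.
    Returns the deformed (N, 3) vertex positions.
    """
    def gaussian_kernel(a, b):
        # phi(r) = exp(-r^2 / (2 * sigma^2)) for all pairwise distances r.
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve phi_cc @ weights = control_disp so the interpolant reproduces the
    # displacement exactly at every control point (small ridge term for stability).
    phi_cc = gaussian_kernel(control_rest, control_rest)
    weights = np.linalg.solve(phi_cc + 1e-8 * np.eye(len(control_rest)), control_disp)

    # Evaluate the interpolant at each mesh vertex and add the offset.
    phi_vc = gaussian_kernel(vertices, control_rest)
    return vertices + phi_vc @ weights
```

In a full system along the lines of the abstract, `control_disp` would come from the optical-flow-tracked feature points after the rigid head motion recovered from the cylindrical model has been factored out, so that only expression-driven displacement deforms the mesh.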