• Title/Abstract/Keywords: facial image

Search results: 825 (processing time 0.027 s)

Real-Time Facial Region Detection in Video Using the Brightness Distribution of the Face's Dominant Color (Using Analysis of Major Color Component facial region detection algorithm for real-time image)

  • 최미영;김계영;최형일
    • Journal of Digital Contents Society, Vol. 8, No. 3, pp. 329-339, 2007
  • This paper proposes a facial region detection technique that uses spatio-temporal information from a continuously captured video stream and can be applied in real time under a variety of lighting conditions. The proposed algorithm computes an edge difference image from two consecutive frames and detects an initial facial region from the difference image accumulated over the incoming frames. To remove the influence of external illumination, the detected initial facial region is split vertically into two object regions using its horizontal profile, and the dominant-color brightness is computed for each object region. After background and noise components are removed, the merged dominant-color brightness distribution of the sub-regions is approximated by an ellipse, so that the exact tilt and extent of the face are computed in real time. The proposed method was tested on video acquired under various lighting conditions and showed good facial region detection performance for left/right face tilts of up to $30^{\circ}$.

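The detection pipeline summarized in this abstract (edge differences between consecutive frames, an accumulated difference image, and an ellipse approximation of the remaining region) can be illustrated with a few lines of OpenCV. The sketch below is only a minimal approximation of that idea under assumed thresholds, not the authors' implementation; the dominant-color brightness and horizontal-profile steps are omitted.

```python
import cv2
import numpy as np

def detect_face_ellipse(video_path, num_frames=30, diff_thresh=25):
    """Rough sketch: accumulate edge differences over consecutive frames,
    threshold the accumulation, and fit an ellipse to the largest blob."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return None
    prev_edges = cv2.Canny(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), 50, 150)
    acc = np.zeros(prev_edges.shape, dtype=np.float32)

    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
        # Edge difference between two consecutive frames, accumulated over time.
        acc += cv2.absdiff(edges, prev_edges).astype(np.float32)
        prev_edges = edges
    cap.release()

    # Threshold the accumulated difference image to get a candidate face mask.
    acc_norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(acc_norm, diff_thresh, 255, cv2.THRESH_BINARY)

    # Approximate the largest moving region by an ellipse (center, axes, tilt).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:            # fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)  # ((cx, cy), (major, minor), angle)
```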

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling

  • 경규민;박민용;현창호
    • Proceedings of the KIEE (Korean Institute of Electrical Engineers) 2005 Symposium, Information and Control Division, pp. 113-115, 2005
  • This paper presents a method for 3D modeling of facial expression from a frontal image and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror geometry, it is robust, accurate, and inexpensive, and it avoids the problem of synchronizing data among different cameras. Mirrors placed near the cheeks reflect the side views of markers on the face. To optimize the system, facial feature points closely associated with human emotions must be selected, so we refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). Colored dot markers are placed on the selected feature points to capture facial deformation as the subject makes various expressions. Before the 3D coordinates of the extracted feature points are computed, the points are grouped by facial part, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of the 3D facial expression.

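Recovering 3D marker coordinates from a frontal view plus a mirror view can be treated as ordinary two-view triangulation, because a planar mirror acts as a second, virtual camera. The sketch below shows only that generic triangulation step with OpenCV; the projection matrices `P_front` and `P_mirror` are hypothetical inputs that would have to come from calibrating the real camera and the mirror geometry, and the marker grouping and FDP/FAP selection described in the abstract are not reproduced.

```python
import cv2
import numpy as np

def triangulate_markers(P_front, P_mirror, pts_front, pts_mirror):
    """Triangulate marker positions seen directly and via a mirror.

    P_front, P_mirror : 3x4 projection matrices (real camera and the virtual
                        camera implied by the mirror) -- assumed known.
    pts_front, pts_mirror : 2xN arrays of matched marker pixel coordinates.
    Returns an Nx3 array of 3D marker positions.
    """
    pts4d = cv2.triangulatePoints(P_front, P_mirror,
                                  pts_front.astype(np.float64),
                                  pts_mirror.astype(np.float64))
    pts3d = (pts4d[:3] / pts4d[3]).T  # dehomogenize
    # Note: a mirror view is reflected, so one axis of the virtual camera
    # must be flipped when P_mirror is constructed.
    return pts3d
```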

A Gaze Tracking based on the Head Pose in Computer Monitor

  • 오승환;이희영
    • Proceedings of the IEEK (Institute of Electronics Engineers of Korea) 2002 Summer Conference (3), pp. 227-230, 2002
  • In this paper we estimate the overall gaze direction from the head pose for human-computer interaction. To determine the user's gaze direction in an image, the facial features must be located accurately. To do so, we binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric structure of the face in the binarized image. To determine the head orientation, we construct an imaginary plane on the lines joining the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The pose of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is obtained from the surface normal vector of this plane. Because the study uses an ordinary PC camera, it should contribute to the practical use of gaze tracking technology.

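The gaze estimate described above is the surface normal of the "virtual facial plane" spanned by the facial features. Assuming the two eye centers and the mouth center are already available as 3D points (which is the part the paper actually estimates), the plane normal is just a cross product, as in this minimal sketch:

```python
import numpy as np

def facial_plane_normal(left_eye, right_eye, mouth):
    """Unit normal of the plane through three facial feature points (3D).
    The sign convention (toward or away from the camera) is a choice."""
    left_eye, right_eye, mouth = map(np.asarray, (left_eye, right_eye, mouth))
    v1 = right_eye - left_eye
    v2 = mouth - left_eye
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

# Hypothetical feature coordinates in the camera frame (arbitrary units):
gaze_dir = facial_plane_normal([-3.0, 0.0, 50.0], [3.0, 0.0, 50.0], [0.0, -5.0, 52.0])
print(gaze_dir)  # direction the face (and roughly the gaze) points along
```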

Reconstruction of Partially Occluded Facial Image Utilizing KPCA-based Denoising Method

  • 강대성;김종호;박주영
    • Proceedings of the Korea Fuzzy Logic and Intelligent Systems Society 2005 Spring Conference, Vol. 15, No. 1, pp. 247-250, 2005
  • In many cases a partially occluded facial image must be reconstructed; a typical example is the face of a criminal captured by a surveillance camera. Because important parts of the face are covered, it is very difficult for an automatic face recognition system or a human observer to recognize those parts. To overcome this difficulty, this paper considers applying a Kernel PCA-based denoising method to partially occluded facial images.

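A Kernel PCA denoising step of the kind mentioned in this abstract can be sketched with scikit-learn's KernelPCA, which can learn an approximate pre-image (inverse transform). This is a generic illustration under assumed kernel parameters, not the authors' configuration: flattened clean faces define the subspace, occluded faces are projected onto it, and the inverse transform fills in the damaged pixels with plausible values.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_reconstruct(train_faces, occluded_faces, n_components=50, gamma=1e-4):
    """train_faces, occluded_faces: 2D arrays, one flattened face image per row.
    Returns reconstructions of the occluded faces (same shape as the input)."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True, alpha=0.1)
    kpca.fit(train_faces)                   # learn the face subspace from clean faces
    codes = kpca.transform(occluded_faces)  # project damaged faces onto that subspace
    return kpca.inverse_transform(codes)    # pre-image = denoised reconstruction
```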

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • Proceedings of ICCAS 2002, Institute of Control, Robotics and Systems (ICROS), pp. 111.1-111, 2002
  • System configuration: 1. The first part is image acquisition. 2. The second part creates the vector image and processes the obtained facial image; it finds the facial area from skin color by first locating the skin-color area with the highest weight from the eigenface, which consists of eigenvectors, and then creating the eigenface vector image from the obtained facial area. 3. The third part is the recognition module.

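The eigenface portion of this pipeline (building an eigenvector basis from face images and projecting a face onto it) can be written compactly with scikit-learn's PCA. The sketch below covers only that projection; the skin-color weighting and the recognition module are omitted, and the parameter values are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_eigenfaces(face_rows, n_components=20):
    """face_rows: array of shape (n_faces, h*w), one flattened face per row.
    Returns the fitted PCA model; its components_ are the eigenfaces."""
    pca = PCA(n_components=n_components, whiten=True)
    pca.fit(face_rows)
    return pca

def project_face(pca, face_row):
    """Feature vector of one flattened face (shape (h*w,)) in eigenface space."""
    return pca.transform(face_row.reshape(1, -1))[0]
```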

Center Position Tracking Enhancement of Eyes and Iris on the Facial Image

  • Chai Duck-hyun;Ryu Kwang-ryol
    • Journal of Information and Communication Convergence Engineering, Vol. 3, No. 2, pp. 110-113, 2005
  • An enhanced method for tracking the center positions of the eyes and iris in a facial image is presented. A facial image is acquired with a CCD camera and converted into a binary image. The eye region, characterized by a specific brightness and shape, is located with the FRM method using five neighboring mask areas, and the iris within the eye is tracked with the FPDP method. The experimental results show that the proposed methods improve center-position tracking compared with the pixel-average-coordinate method.
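
FRM and FPDP are the paper's own methods and are not reproduced here, but the baseline the authors compare against, averaging the coordinates of the dark eye pixels after binarization, is simple to show. The threshold value in this sketch is an assumption.

```python
import cv2
import numpy as np

def eye_center_by_pixel_average(eye_region_gray, thresh=60):
    """Baseline center estimate: binarize a grayscale eye patch and take the
    mean coordinates of the dark (eye/iris) pixels. Returns (x, y) or None."""
    _, binary = cv2.threshold(eye_region_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    ys, xs = np.nonzero(binary)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```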

A photo-based realistic facial animation

  • 김재환;정일권
    • Proceedings of the Korea Contents Association 2011 Spring Conference, pp. 51-52, 2011
  • In this paper we introduce a novel, complete framework for constructing realistic facial animations from just one facial photo as input. Our approach operates in 2D photo space, not 3D space. Moreover, we use a computer vision technique (digital matting) as well as conventional image processing methods (image warping and texture synthesis) to express more realistic facial animations. Simulation results show that our scheme produces high-quality facial animations very efficiently.


Personalized Facial Expression Recognition System using Fuzzy Neural Networks and Robust Image Processing

  • 김대진;김종성;변증남
    • Proceedings of the IEEK (Institute of Electronics Engineers of Korea) 2002 Summer Conference (3), pp. 25-28, 2002
  • This paper introduces a personalized facial expression recognition system. Many previous works on facial expression recognition focus on the six formal universal facial expressions; however, it is very difficult for an ordinary person to produce such expressions without considerable effort and training. Personalized services are also a major focus of researchers in various fields these days. We therefore propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing.


Comparison of Mandible Changes on Three-Dimensional Computed Tomography Images After Mandibular Surgery in Facial Asymmetry Patients

  • 김미령;진병로
    • Journal of Yeungnam Medical Science, Vol. 25, No. 2, pp. 108-116, 2008
  • Background: When surgeons plan mandibular orthognathic surgery for patients with skeletal Class III facial asymmetry, they must consider the exact surgical method for correcting the asymmetry. Three-dimensional (3D) CT imaging is efficient at depicting specific structures in the craniofacial area; it reproduces actual measurements by minimizing errors from patient movement and allows image magnification. Owing to the rapid development of digital imaging technology and the expanding range of treatment, rapid progress has been made in the study of three-dimensional facial skeleton analysis. The purpose of this study was to compare mandible changes on 3D CT images after mandibular surgery in facial asymmetry patients. Materials and methods: This study included 7 patients who underwent 3D CT before and after correction of facial asymmetry in the oral and maxillofacial surgery department of Yeungnam University Hospital between August 2002 and November 2005. The patients included 2 males and 5 females, with ages ranging from 16 to 30 years (average 21.4 years). Frontal CT images were obtained before and after surgery, and changes in mandibular angle and length were measured. Results: When the measurements obtained before and after mandibular surgery were compared, correction of the facial asymmetry was evident on the postoperative images. The mean difference between the right and left mandibular angles was $7^{\circ}$ before surgery and $1.5^{\circ}$ after surgery. The right-to-left mandibular length ratio subtracted from 1 was 0.114 before surgery and 0.036 after surgery. The differences were analyzed using a nonparametric test, the Wilcoxon signed-rank test (p<0.05). Conclusion: The system that has been developed produces an accurate three-dimensional representation of the skull, upon which individualized surgery of the skull and jaws is easily performed. The system also permits accurate measurement and monitoring of postsurgical changes to the face and jaws by reproducible and noninvasive means.

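The two asymmetry measures reported in the Results (the right-left angle difference and 1 minus the right-to-left length ratio) and the Wilcoxon signed-rank comparison can be expressed with SciPy as below. The per-patient arrays are purely hypothetical placeholders, not the study's data; only the computation pattern is illustrated.

```python
import numpy as np
from scipy.stats import wilcoxon

def asymmetry_indices(right_angle, left_angle, right_len, left_len):
    """Angle asymmetry (degrees) and length asymmetry (1 - right/left length ratio),
    as defined in the abstract above."""
    return abs(right_angle - left_angle), abs(1.0 - right_len / left_len)

# Hypothetical per-patient angle-asymmetry values (NOT the study's data),
# only to show how the paired nonparametric comparison would be run.
pre_op = np.array([7.2, 6.5, 8.1, 5.9, 7.8, 6.3, 7.4])
post_op = np.array([1.2, 1.9, 1.4, 1.1, 2.0, 1.6, 1.3])
stat, p_value = wilcoxon(pre_op, post_op)  # Wilcoxon signed-rank test; significant if p < 0.05
print(stat, p_value)
```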

Detection of Facial Feature Regions by Manipulation of DCT's Coefficients

  • 이부형;류장렬
    • Journal of the Korea Academia-Industrial cooperation Society, Vol. 8, No. 2, pp. 267-272, 2007
  • This paper proposes a new method for detecting facial feature regions that is insensitive to lighting conditions and face size, based on the properties of DCT coefficients. In general, the DCT concentrates an image's energy in the low-frequency region, whereas facial features contain relatively high-frequency components of the face image; therefore, removing some of the low-frequency DCT coefficients and applying the inverse transform yields an image in which the facial feature regions are emphasized. Accordingly, this paper extracts facial feature candidates by removing part of the low-frequency coefficients from the DCT-transformed image and then determines the facial feature regions by applying a template. Once the feature regions are determined, a feature extraction algorithm is applied to distinguish the eyes, nose, and mouth. The proposed algorithm was tested on MIT's CBCL database and the Yale Face Database B. The experimental results show that detecting facial feature regions after removing some of the low-frequency DCT coefficients improves recognition performance, regardless of changes in image size and lighting conditions, compared with using the unmodified image.

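The core operation described in this abstract, zeroing part of the low-frequency DCT coefficients of a face image and inverse-transforming so that the high-frequency feature regions stand out, can be sketched with SciPy. The size of the zeroed block is an arbitrary assumption, and the template-matching stage that follows in the paper is not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def emphasize_facial_features(gray_face, cutoff=8):
    """Zero the lowest-frequency 2D DCT coefficients of a grayscale face image
    and invert the transform, so the relatively high-frequency feature regions
    (eyes, nose, mouth) stand out. `cutoff` is the side of the zeroed block."""
    coeffs = dctn(np.asarray(gray_face, dtype=float), norm="ortho")
    coeffs[:cutoff, :cutoff] = 0.0            # remove part of the low-frequency coefficients
    emphasized = idctn(coeffs, norm="ortho")  # inverse DCT of the modified coefficients
    # Rescale to 0..255 for display or thresholding of feature candidates.
    emphasized -= emphasized.min()
    if emphasized.max() > 0:
        emphasized *= 255.0 / emphasized.max()
    return emphasized.astype(np.uint8)
```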