• Title/Abstract/Keyword: Facial image

Search results: 834 (processing time: 0.035 s)

가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현 (Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality)

  • 최장석;이기영
    • 전자공학회논문지S / Vol. 35S, No. 7 / pp.95-102 / 1998
  • This paper proposes a method for synchronizing synthetic facial image sequences with synthetic speech. The LP-PSOLA method synthesizes speech from demi-syllables; we provide 3,040 demi-syllables for unrestricted synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines a total of 11 fundamental lip-shape patterns for the Korean consonants and vowels; these fundamental lip shapes suffice to pronounce all Korean sentences. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, medial, and final sound of each syllable in the Korean input text, and interpolates naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech, and this estimation accomplishes synchronization of the facial image sequences with the speech. Speech synthesis requires disk storage for the 3,040 demi-syllables, whereas synthesis of the facial image sequences requires storage for only one image, because all frames are synthesized from the neutral face. The method realizes a synchronized system that can read Korean sentences with synthetic speech and synthetic facial image sequences.
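The duration-to-frame-count estimation described above can be sketched as follows. This is a minimal illustration, not the authors' code, and it assumes a hypothetical frame rate of 30 fps, which the abstract does not specify:

```python
FPS = 30  # assumed video frame rate (not stated in the abstract)

def inbetween_frames(syllable_durations_ms):
    """Map each syllable's synthetic-speech duration (in ms) to a frame
    budget; lip-shape key frames for the initial, medial and final sounds
    are then interpolated inside this budget."""
    return [max(1, round(d * FPS / 1000)) for d in syllable_durations_ms]

# Two hypothetical syllables lasting 200 ms and 240 ms
print(inbetween_frames([200, 240]))
```

Because the frame budget is derived directly from the speech durations, the image sequence stays aligned with the audio without any extra timing signal.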


동영상에서 얼굴의 주색상 밝기 분포를 이용한 실시간 얼굴영역 검출기법 (Real-time Facial Region Detection Using the Major Color Brightness Distribution of the Face in Video)

  • 최미영;김계영;최형일
    • 디지털콘텐츠학회 논문지 / Vol. 8, No. 3 / pp.329-339 / 2007
  • This paper proposes a facial region detection method that uses the spatio-temporal information of a continuously input video stream and can be applied in real time under a variety of lighting conditions. The proposed algorithm computes an edge difference image from two consecutive frames and detects an initial facial region from a difference image accumulated over the successive input frames. To remove the influence of external illumination, the detected initial facial region is bisected vertically using its horizontal profile, and the major color brightness is computed for each object region. After removing background and noise components, the divided facial regions are merged and approximated by an ellipse using the major color brightness distribution, so that the exact tilt and extent of the face are computed in real time. The proposed method was tested on videos acquired under various lighting conditions and showed good facial region detection performance for left-right face tilts of $30^{\circ}$ or less.
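The final ellipse-approximation step can be illustrated with second-order image moments. This is a generic NumPy sketch, not the authors' implementation, and it assumes the brightness-based segmentation has already produced a binary face mask:

```python
import numpy as np

def face_tilt_deg(mask):
    """Estimate the tilt of a binary face mask (in degrees) from the
    principal axis of its second-order central moments, i.e. the
    orientation of the best-fit ellipse."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

# A synthetic diagonal blob: its principal axis is tilted 45 degrees
mask = np.zeros((64, 64), dtype=bool)
for i in range(50):
    mask[5 + i, 5 + i] = True
print(face_tilt_deg(mask))
```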


거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링 (Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling)

  • 경규민;박민용;현창호
    • 대한전기학회:학술대회논문집 / 대한전기학회 2005년도 심포지엄 논문집 정보 및 제어부문 / pp.113-115 / 2005
  • This paper presents a method for 3D modeling of facial expression from a frontal view and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror geometry, it is robust, accurate, and inexpensive; in addition, it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, we must select facial feature points closely associated with human emotion; we therefore refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). Colored dot markers are placed on the selected feature points to detect facial deformation while the subject makes a variety of expressions. Before computing the 3D coordinates of the extracted feature points, we group the points according to the facial part they belong to, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions.
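The mirror geometry underlying this setup can be sketched as a point reflection across the mirror plane: the mirror image of a marker behaves as if observed by a virtual side camera. This is an illustrative sketch of the geometric principle, not the paper's implementation:

```python
import numpy as np

def reflect_across_mirror(point, mirror_point, mirror_normal):
    """Reflect a 3D point across a plane mirror, given any point on the
    mirror plane and the plane's normal vector (any length)."""
    p = np.asarray(point, dtype=float)
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.dot(p - np.asarray(mirror_point, dtype=float), n)
    return p - 2.0 * d * n

# A marker at x = 2 in front of a mirror lying in the x = 0 plane
print(reflect_across_mirror([2.0, 1.0, 0.5], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
```

Triangulating a marker against its reflection then works like ordinary two-view stereo, with the virtual camera given by reflecting the real one.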


얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적 (A Gaze Tracking based on the Head Pose in Computer Monitor)

  • 오승환;이희영
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(3) / pp.227-230 / 2002
  • In this paper we estimate the overall gaze direction from the head pose for human-computer interaction. To determine the gaze direction of a user in an image, it is important to extract the facial features accurately. To do this, we binarize the input image and locate the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric structure of the face in the binarized image. To determine the head orientation, we construct an imaginary plane on the lines connecting the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is found from the surface normal vector of the virtual facial plane. This study, which uses a popular PC camera, will contribute to the practical use of gaze tracking technology.
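The last step, taking the surface normal of the facial plane as the gaze direction, reduces to a cross product of two in-plane vectors. A minimal sketch with assumed feature coordinates (not the paper's code):

```python
import numpy as np

def gaze_direction(eye_l, eye_r, mouth):
    """Unit normal of the plane through the two eyes and the mouth;
    its direction approximates where the head is pointing."""
    v1 = np.subtract(eye_r, eye_l).astype(float)
    v2 = np.subtract(mouth, eye_l).astype(float)
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

# A frontal face with all three features in the z = 0 plane
# yields a normal along the z axis
print(gaze_direction([-3, 0, 0], [3, 0, 0], [0, -5, 0]))
```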


KPCA 기반 노이즈 제거 기법을 이용한 부분 손상된 얼굴 영상의 복원 (Reconstruction of Partially Occluded Facial Image Utilizing KPCA-based Denoising Method)

  • 강대성;김종호;박주영
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2005년도 춘계학술대회 학술발표 논문집 제15권 제1호 / pp.247-250 / 2005
  • In many cases, a partially occluded facial image must be reconstructed; a typical example is the face of a criminal captured by a surveillance camera. Because important parts of the face are covered in such cases, it is very difficult for an automatic face recognition system, or for a human observer, to recognize them. To overcome this difficulty, this paper considers the problem of applying a Kernel PCA (KPCA)-based denoising method to partially occluded facial images.
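The KPCA-denoising idea can be sketched with scikit-learn's `KernelPCA`, whose pre-image computation (`fit_inverse_transform=True`) maps a projected sample back to input space. The random data, kernel choice, and parameters below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 64))   # stand-in for vectorized training faces
occluded = faces[0].copy()
occluded[:16] = 0.0                  # simulate a partial occlusion

kpca = KernelPCA(n_components=20, kernel="rbf",
                 fit_inverse_transform=True, alpha=0.1)
kpca.fit(faces)

# Project the occluded face onto the KPCA subspace, then compute the
# pre-image: this reconstruction serves as the denoised estimate.
restored = kpca.inverse_transform(kpca.transform(occluded[None, :]))[0]
print(restored.shape)
```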


Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2002년도 ICCAS / pp.111.1-111 / 2002
  • System configuration: 1. The first part is image acquisition. 2. The second part creates the vector image and processes the acquired facial image; it finds the facial area from skin color by first locating the skin-color area with the highest weight from the eigenface, which consists of eigenvectors, and then creating the eigenface vector image from the obtained facial area. 3. The third part is the recognition module.


Center Position Tracking Enhancement of Eyes and Iris on the Facial Image

  • Chai Duck-hyun;Ryu Kwang-ryol
    • Journal of information and communication convergence engineering / Vol. 3, No. 2 / pp.110-113 / 2005
  • An enhancement of the tracking capability for the center position of the eye and iris in a facial image is presented. A facial image is acquired with a CCD camera and converted into a binary image. The eye region, characterized by a specific brightness and shape, is located with the FRM method using five neighboring mask areas, and the iris within the eye is tracked with the FPDP method. Experimental results show that the proposed methods enhance center-position tracking compared with the pixel-average-coordinate method.
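The baseline pixel-average-coordinate method that the proposed FRM/FPDP approach is compared against is simply the centroid of the binarized region. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def pixel_average_center(binary):
    """Baseline center estimate: the mean (x, y) coordinates of the
    'on' pixels in the binarized eye region."""
    ys, xs = np.nonzero(binary)
    return float(xs.mean()), float(ys.mean())

eye = np.zeros((5, 5), dtype=bool)
eye[1:4, 1:4] = True   # a 3x3 blob centered at (2, 2)
print(pixel_average_center(eye))
```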

한 장의 포토기반 실사 수준 얼굴 애니메이션 (A photo-based realistic facial animation)

  • 김재환;정일권
    • 한국콘텐츠학회:학술대회논문집 / 한국콘텐츠학회 2011년도 춘계 종합학술대회 논문집 / pp.51-52 / 2011
  • In this paper we introduce a novel, complete framework for constructing realistic facial animations from just one facial photo as input. Our approach operates in 2D photo space, not 3D space. Moreover, we utilize a computer-vision technique (digital matting) as well as conventional image processing methods (image warping and texture synthesis) to express more realistic facial animations. Simulated results show that our scheme produces high-quality facial animations very efficiently.


퍼지 신경망과 강인한 영상 처리를 이용한 개인화 얼굴 표정 인식 시스템 (Personalized Facial Expression Recognition System using Fuzzy Neural Networks and robust Image Processing)

  • 김대진;김종성;변증남
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(3) / pp.25-28 / 2002
  • This paper introduces a personalized facial expression recognition system. Many previous works on facial expression recognition focus on the formal six universal facial expressions. However, it is very difficult for an ordinary person to make such expressions without much effort and training, and personalized services are now a main focus of researchers in various fields. Thus, we propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing.


안면 비대칭 환자의 하악골 수술 후 하악골 변화에 대한 3차원 CT 영상 비교 (Comparison of Mandible Changes on Three-Dimensional Computed Tomography Images After Mandibular Surgery in Facial Asymmetry Patients)

  • 김미령;진병로
    • Journal of Yeungnam Medical Science / Vol. 25, No. 2 / pp.108-116 / 2008
  • Background: When surgeons plan mandibular orthognathic surgery for patients with skeletal Class III facial asymmetry, they must consider the exact surgical method for correcting the asymmetry. Three-dimensional (3D) CT imaging is efficient in depicting specific structures in the craniofacial area; it reproduces actual measurements by minimizing errors from patient movement and allows image magnification. Owing to the rapid development of digital image technology and the expansion of the treatment range, rapid progress has been made in the study of three-dimensional facial skeleton analysis. The purpose of this study was to compare mandible changes on 3D CT images after mandibular surgery in facial asymmetry patients. Materials & methods: This study included 7 patients who underwent 3D CT before and after correction of facial asymmetry in the oral and maxillofacial surgery department of Yeungnam University Hospital between August 2002 and November 2005. The patients were 2 males and 5 females, aged 16 to 30 years (average 21.4 years). Frontal CT images were obtained before and after surgery, and changes in mandibular angle and length were measured. Results: When we compared the measurements obtained before and after mandibular surgery, correction of the facial asymmetry was identified on the postoperative images. The mean difference between the right and left mandibular angles was $7^{\circ}$ before surgery and $1.5^{\circ}$ after surgery. The right-to-left mandibular length ratio subtracted from 1 was 0.114 before surgery and 0.036 after surgery. The differences were analyzed using the nonparametric Wilcoxon signed-rank test (p<0.05). Conclusion: The system that has been developed produces an accurate three-dimensional representation of the skull, upon which individualized surgery of the skull and jaws is easily planned. It also permits accurate measurement and monitoring of postsurgical changes to the face and jaws through reproducible and noninvasive means.
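The two asymmetry indices reported in the results can be computed as follows. The index definitions come from the abstract; the example angle and length inputs are hypothetical, chosen only to reproduce the reported index values:

```python
def angle_difference(right_deg, left_deg):
    """Left-right mandibular angle difference in degrees."""
    return abs(right_deg - left_deg)

def length_asymmetry(right_len, left_len):
    """One minus the right-to-left mandibular length ratio."""
    return abs(1.0 - right_len / left_len)

# Hypothetical pre-op values reproducing the reported indices
print(angle_difference(127.0, 120.0))   # reported pre-op mean: 7 degrees
print(length_asymmetry(88.6, 100.0))    # reported pre-op index: 0.114
```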
