• Title/Summary/Keyword: 3차원 얼굴영상 (3-D face image)

Search Results: 172

Design of Face Recognition Algorithm based Optimized pRBFNNs Using Three-dimensional Scanner (최적 pRBFNNs 패턴분류기 기반 3차원 스캐너를 이용한 얼굴인식 알고리즘 설계)

  • Ma, Chang-Min;Yoo, Sung-Hoon;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.748-753 / 2012
  • In this paper, a face recognition algorithm is designed based on an optimized pRBFNNs pattern classifier using a three-dimensional scanner. In general, two-dimensional image-based face recognition systems extract facial features from the gray levels of images, so environmental variations such as natural sunlight, artificial lighting, and face pose degrade system performance. The proposed algorithm uses a three-dimensional scanner to overcome these drawbacks of two-dimensional face recognition. First, the face shape is captured with the three-dimensional scanner, and the scanned face is converted to a frontal view through a pose-compensation process. Second, depth data of the face are extracted using the point signature method. Finally, recognition performance is verified using the optimized pRBFNNs, which are suited to high-dimensional pattern recognition problems.
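The paper's optimized pRBFNNs classifier is not detailed in this listing; as a rough, simplified stand-in, the sketch below trains a plain Gaussian RBF network on depth-feature vectors. The k-means center initialization, kernel width, and feature dimensions are illustrative assumptions.

```python
# Simplified RBF-network classifier sketch (NOT the paper's optimized pRBFNNs):
# Gaussian hidden units centered by k-means, linear read-out fit by least squares.
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf_classifier(X, y, n_centers=20, sigma=1.0):
    """X: (n_samples, n_features) depth-feature vectors; y: integer class labels."""
    y = np.asarray(y)
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian hidden-layer activations
    T = np.eye(y.max() + 1)[y]                  # one-hot targets
    W, *_ = np.linalg.lstsq(H, T, rcond=None)   # linear output weights
    return centers, W

def predict_rbf(X, centers, W, sigma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2 * sigma ** 2))
    return (H @ W).argmax(axis=1)
```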

Wavelet based Fuzzy Integral System for 3D Face Recognition (퍼지적분을 이용한 웨이블릿 기반의 3차원 얼굴 인식)

  • Lee, Yeung-Hak;Shim, Jae-Chang
    • Journal of KIISE:Software and Applications / v.35 no.10 / pp.616-626 / 2008
  • The face shape extracted from depth values conveys the most important facial feature information, and face images decomposed into frequency subbands capture personal features in detail. In this paper, we develop a method for recognizing range face images by combining multiple frequency domains for each depth image and fusing the depth regions with the fuzzy integral. In the first step, the nose tip, which forms a protrusion on the face, is located in the extracted face area; it serves as the reference point for normalizing the facial pose and for extracting multiple regions by depth thresholding. In the second step, wavelet coefficients extracted from selected subbands are adopted as features for the authentication problem. The third step applies the eigenface and Linear Discriminant Analysis (LDA) methods to reduce dimensionality and classify. In the last step, the individual classifiers obtained from the coefficients at each resolution level are aggregated using the fuzzy integral. In the experiments, the region with depth threshold 60 (DT60) shows the highest recognition rate among the regions, and the depth-fusion method with the fuzzy integral achieves a 98.6% recognition rate.
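The fuzzy-integral fusion step lends itself to a short sketch. The code below computes a Sugeno fuzzy integral over per-classifier scores using a λ-fuzzy measure, one common way such fusion is implemented; the densities (classifier importances) and scores are made-up example values, not the paper's.

```python
# Minimal Sugeno fuzzy-integral fusion sketch for combining classifier scores.
import numpy as np
from scipy.optimize import brentq

def lambda_measure(densities):
    """Solve 1 + lam = prod(1 + lam * g_i) for the Sugeno lambda-measure parameter."""
    g = np.asarray(densities, dtype=float)
    if np.isclose(g.sum(), 1.0):
        return 0.0
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    # lam > 0 when sum(g) < 1, and -1 < lam < 0 when sum(g) > 1.
    return brentq(f, 1e-10, 1e6) if g.sum() < 1 else brentq(f, -1 + 1e-10, -1e-10)

def sugeno_integral(scores, densities):
    """scores: per-classifier confidences in [0, 1]; densities: importances g_i."""
    lam = lambda_measure(densities)
    order = np.argsort(scores)[::-1]              # sort classifiers by descending score
    h = np.asarray(scores, dtype=float)[order]
    g = np.asarray(densities, dtype=float)[order]
    G, fused = 0.0, 0.0
    for hi, gi in zip(h, g):
        G = G + gi + lam * G * gi                 # measure of the growing subset
        fused = max(fused, min(hi, G))
    return fused

# Example: fuse three subband classifiers' scores for one subject.
print(sugeno_integral([0.9, 0.6, 0.4], [0.4, 0.3, 0.2]))
```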

A Study on the Feature Point Extraction and Image Synthesis in the 3-D Model Based Image Transmission System (3차원 모델 기반 영상전송 시스템에서의 특징점 추출과 영상합성 연구)

  • 배문관;김동호;정성환;김남철;배건성
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.7 / pp.767-778 / 1992
  • A method to extract feature points and to synthesize human facial images in a 3-D model-based coding system is discussed. Facial feature points are extracted automatically using image processing techniques and prior knowledge of the human face. A wireframe model matched to the face is then deformed according to the motion of the extracted feature points. The synthesized image is produced by mapping the texture of the initial front-view image onto the deformed wireframe. Experimental results show that the synthesized images appear with little unnaturalness.
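As an illustration of the texture-mapping step (mapping the frontal texture onto the deformed wireframe), the sketch below warps the texture of a single mesh triangle with an affine transform; repeating this for every triangle gives a piecewise-affine mapping. This is a generic recipe, not the paper's exact procedure.

```python
# Hypothetical per-triangle texture mapping with OpenCV (piecewise-affine warp).
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """src_tri / dst_tri: 3x2 arrays of (x, y) vertices in the source / deformed mesh."""
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    M = cv2.getAffineTransform(src_tri, dst_tri)
    warped = cv2.warpAffine(src_img, M, (dst_img.shape[1], dst_img.shape[0]))
    # Copy only the pixels inside the destination triangle.
    mask = np.zeros(dst_img.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)
    dst_img[mask > 0] = warped[mask > 0]
    return dst_img
```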


3D Makeup Simulation using Realistic Facial Data (사실적인 얼굴 데이터를 이용한 3차원 메이크업 시뮬레이션)

  • Lee, Sang-Hoon;Kim, Hyeon-Joong;Choi, Soo-Mi
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.410-412 / 2012
  • A makeup simulation is a tool that lets a user try various makeup styles on a virtual face using an input device and a display. Several makeup simulations for different environments have been developed recently, but most systems work on two-dimensional images and can only show results under limited conditions. In this study, we developed a realistic makeup system that applies measured skin roughness and reflectance and allows the applied reflectance to be adjusted. With the developed simulation method, makeup that takes lighting into account can be simulated on high-resolution face data acquired with a 3-D scanner using the measured reflectance. A vertex-based shape representation keeps the rendering of the 3-D model simple and flexible, and applying different reflectance to different facial regions makes the makeup simulation more realistic. Allowing the user to adjust the reflectance directly also enables more realistic 3-D makeup.
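A tiny per-vertex shading sketch can illustrate how reflectance might be applied and adjusted per facial region; the Lambertian-plus-Blinn-Phong model and the per-vertex albedo/shininess values below are assumptions standing in for the paper's measured reflectance data.

```python
# Per-vertex shading sketch: diffuse (Lambert) + specular (Blinn-Phong) terms,
# with per-vertex albedo and shininess that a user could adjust per facial region.
import numpy as np

def shade_vertices(normals, albedo, shininess, light_dir, view_dir, spec_strength=0.3):
    """normals: (n, 3) unit normals; albedo: (n, 3) RGB in [0, 1]; shininess: (n,)."""
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)                  # Blinn half-vector
    ndotl = np.clip(normals @ l, 0.0, None)
    ndoth = np.clip(normals @ h, 0.0, None)
    diffuse = albedo * ndotl[:, None]
    specular = spec_strength * (ndoth ** shininess)[:, None]
    return np.clip(diffuse + specular, 0.0, 1.0)         # per-vertex RGB color
```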

Face Detection Using Adaboost and Template Matching of Depth Map based Block Rank Patterns (Adaboost와 깊이 맵 기반의 블록 순위 패턴의 템플릿 매칭을 이용한 얼굴검출)

  • Kim, Young-Gon;Park, Rae-Hong;Mun, Seong-Su
    • Journal of Broadcast Engineering / v.17 no.3 / pp.437-446 / 2012
  • Face detection algorithms using two-dimensional (2-D) intensity or color images have been studied for decades. Recently, with the development of low-cost range sensors, three-dimensional (3-D) information (i.e., a depth image representing the distance between the camera and objects) can easily be used to reliably extract facial features, and most people share a similar 3-D facial structure. This paper proposes a face detection method using intensity and depth images. First, the Adaboost algorithm applied to the intensity image classifies face and non-face candidate regions. Each candidate region is divided into 5×5 blocks and the depth values are averaged in each block, and a 5×5 block rank pattern is constructed by sorting the block averages of depth values. Finally, candidate regions are classified as face or non-face by matching the constructed depth-map-based block rank pattern against a template pattern generated from the training data set; the 5×5 template block rank pattern is constructed in advance by averaging the block ranks over the training set. The proposed algorithm is tested on real images obtained with a Kinect range sensor. Experimental results show that it effectively eliminates most false positives while preserving true positives.
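The block rank pattern itself is simple enough to sketch: average the depth over a 5×5 grid of blocks, rank the block means, and compare the rank pattern to a template. The distance threshold below is an illustrative assumption, not the paper's value.

```python
# Depth-map block rank pattern sketch (5x5 grid), following the description above.
import numpy as np

def block_rank_pattern(depth_region, grid=5):
    """depth_region: 2-D depth array cropped to one face candidate region."""
    h, w = depth_region.shape
    means = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            means[i, j] = depth_region[i * h // grid:(i + 1) * h // grid,
                                       j * w // grid:(j + 1) * w // grid].mean()
    # Rank the 25 blocks by mean depth (0 = nearest block, 24 = farthest).
    return np.argsort(np.argsort(means.ravel())).reshape(grid, grid)

def is_face(depth_region, template_ranks, max_rank_distance=60):
    ranks = block_rank_pattern(depth_region)
    return np.abs(ranks - template_ranks).sum() <= max_rank_distance
```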

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data and capturing a face image from video is very important for online 3-D face animation. Recently there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3-D face model. In this paper, we propose an automatic data-extraction system that extracts and traces a face and its expression data from real-time video input. The system consists of three steps: face detection, facial feature extraction, and face tracking. In face detection, skin pixels are detected with a YCbCr skin color model and the face area is verified with a Haar-based classifier. Brightness and color information are used to extract the eye and lip data related to facial expression, and 10 feature points are extracted from the eye and lip areas following the FAPs defined in MPEG-4. The displacement of the extracted features is then traced across consecutive frames using a color probability distribution model. Experiments showed that the system can trace the expression data at about 8 fps.
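The detection step (YCbCr skin thresholding verified by a Haar-based classifier) can be sketched with OpenCV as below; the Cr/Cb skin range is a commonly used assumption, not the paper's calibrated values.

```python
# Skin-color candidate extraction + Haar-cascade face verification sketch.
import cv2
import numpy as np

def detect_face_in_skin_regions(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # assumed range
    skin_only = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin_mask)
    gray = cv2.cvtColor(skin_only, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```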

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE:Software and Applications / v.30 no.1_2 / pp.153-163 / 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for transforming the pose of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3-D geometric model, we first build a standard mesh set of a reference person for several views of the face: front, left, right, half-left, and half-right. For a given person, only the frontal mesh of the frontal face image to be transformed is composed; the other meshes are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed to produce different views using the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate the opening or closing of the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 people rotating their heads horizontally and measured the location error of 14 main features between the corresponding original and transformed facial images, i.e., the average difference between the distances from the center of the eyes to each feature point in the original and transformed images. The resulting average feature-location error is about 7.0% of the distance from the center of the eyes to the center of the mouth.
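A mesh warp that must tolerate vertex overlap or inversion needs a way to detect when a warped triangle flips; the helper below (an illustration, not the paper's algorithm) flags triangles whose signed area changes sign after warping.

```python
# Detect mesh triangles that flip orientation (or collapse) under a warp.
import numpy as np

def signed_area(tri):
    """tri: 3x2 array of (x, y) vertices; positive for counter-clockwise order."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def inverted_triangles(src_tris, dst_tris, eps=1e-9):
    flipped = []
    for k, (s, d) in enumerate(zip(src_tris, dst_tris)):
        if signed_area(np.asarray(s, float)) * signed_area(np.asarray(d, float)) <= eps:
            flipped.append(k)   # orientation flipped or triangle degenerate
    return flipped
```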

3D Face Modeling based on Image Using Watershed Transform (워터쉐드 변환을 이용한 영상기반의 3D 얼굴 모델링)

  • Shin, Hyun-Shil;Lee, Sang-Eun;Jang, Won-Dal;Yun, Tae-Soo;Yang, Hwang-Kyu
    • Proceedings of the Korea Information Processing Society Conference / 2003.05a / pp.535-538 / 2003
  • In this paper, we propose a method for constructing a 3-D face model from a face image using the watershed transform. Facial feature points are extracted from each region segmented by the watershed transform, and a face mesh model is generated based on the FDP (Facial Definition Parameters) defined in MPEG-4. By combining the accurate information obtained from the region-based over-segmentation produced by the watershed transform with a Candide model based on the MPEG-4 FDP, a 3-D face model can be generated very simply and used efficiently for image compression and transmission.
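The watershed segmentation step can be sketched with OpenCV's marker-based watershed; the marker construction below (Otsu threshold plus distance transform) is a generic recipe, not the paper's specific procedure.

```python
# Marker-based watershed segmentation sketch for a face image.
import cv2
import numpy as np

def watershed_regions(face_bgr):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)
    sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1               # reserve label 0 for the unknown region
    markers[unknown == 255] = 0
    return cv2.watershed(face_bgr, markers)   # region labels; boundaries are -1
```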


Recognition method using stereo images-based 3D information for improvement of face recognition (얼굴인식의 향상을 위한 스테레오 영상기반의 3차원 정보를 이용한 인식)

  • Park Chang-Han;Paik Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.30-38 / 2006
  • In this paper, we address the drop in recognition rate with distance by using the distance and depth information obtained in 3-D from stereo face images. A monocular face image suffers from a reduced recognition rate because of uncertainty about an object's distance, size, motion, rotation, and depth, and performance degrades further when rotation, illumination, or pose changes are not captured. The proposed method consists of an eye detection algorithm, face pose analysis, and principal component analysis (PCA). We convert the RGB color space to YCbCr to detect the face quickly in a limited region, create a multi-layered relative intensity map in the face candidate region, and decide whether it is a face from the facial geometry. The depth information of the distance, eyes, and mouth is obtained from the stereo face images, so faces can be detected across scale, motion, and rotation using distance and depth. The detected left-camera face and the estimated direction difference are then trained with PCA. Simulation results show a face recognition rate of 95.83% for frontal faces at 100 cm and 98.3% under pose changes. The proposed method can therefore achieve a high recognition rate with appropriate scaling and pose changes according to the distance.
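The stereo depth used here can be illustrated with a standard block-matching disparity computation; the focal length and baseline below are placeholder values, not the paper's camera parameters.

```python
# Disparity from a rectified stereo pair and conversion to metric depth (Z = f*B/d).
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.06):
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth    # metres where disparity is valid, 0 elsewhere
```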

3D Face Modeling Using Feature Line Fitting (특징선 정합을 이용한 3차원 얼굴 모델링)

  • 김항기;김황수
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.505-507 / 2000
  • This paper presents a method for obtaining a lifelike 3-D model of a person by mapping textures obtained from a few photographs onto a 3-D head model. The photographs are aligned with the model by fitting feature lines: feature lines for the eyes, nose, mouth, and eyebrows, which characterize the face, are specified on the model, and fitting them to the photographs yields the texture image needed for each part of the model. Using photographs taken from several directions yields a more accurate face model; in this case a single model face may need to be synthesized from several photographs, and the contribution of each photograph to the texture of that face can be computed from the angle between the viewing direction in the photograph and the model face. In this way a low-cost 3-D capture system based on photographs can be implemented.
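The view-angle weighting described above (each photograph contributes to a model face in proportion to how directly it sees that face) can be sketched as follows; the normals and camera directions are made-up example values.

```python
# Angle-based texture blending weights: cosine of the angle between the face
# normal and the direction toward each camera, clipped and normalized.
import numpy as np

def blend_weights(face_normal, view_dirs):
    """face_normal: (3,) unit normal; view_dirs: (k, 3) unit vectors toward cameras."""
    cosines = np.clip(view_dirs @ face_normal, 0.0, None)   # back-facing views get 0
    total = cosines.sum()
    return cosines / total if total > 0 else cosines

# Example: one near-frontal and one oblique photograph of the same face.
n = np.array([0.0, 0.0, 1.0])
views = np.array([[0.0, 0.1, 0.995], [0.7, 0.0, 0.714]])
views /= np.linalg.norm(views, axis=1, keepdims=True)
print(blend_weights(n, views))
```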
