• Title/Summary/Keyword: 3D face

Search Results: 904

Using a Multi-Faced Technique SPFACS Video Object Design Analysis of The AAM Algorithm Applies Smile Detection (다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.99-112 / 2015
  • Digital imaging technology has advanced beyond the traditional boundaries of the multimedia industry through IT convergence, and application technologies for object recognition, in particular face recognition on smartphones, are being actively researched. Face recognition is evolving into intelligent object recognition built on image recognition and detection techniques, and 3D object recognition applied to IP cameras is an active area of face recognition research. In this paper, we first review the human and technical factors and the technology trends in human object recognition, and then design a smile detection method based on SPFACS (Smile Progress Facial Action Coding System) for multi-faceted object recognition. Study method: 1) an imaging system for analyzing 3D objects, reflecting the required human cognitive factors, was designed; 2) face detection parameters for 3D object recognition were identified and an optimal measurement method using the AAM algorithm was proposed; and 3) the results were applied to face recognition, where detection of the subject's tooth region demonstrated the effect of feature point extraction on expression recognition.
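The entry above derives smile detection from facial feature points fitted with an AAM, including the tooth region. As a rough, generic illustration of landmark-based smile scoring (not the authors' SPFACS/AAM pipeline), the sketch below uses dlib's standard 68-point landmark model as a stand-in; the model file path and the width-ratio heuristic are assumptions.

```python
import dlib
import numpy as np

# The standard dlib 68-point model is assumed to be available at this path.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def smile_score(gray_image):
    """Crude smile score: mouth width relative to the outer-eye distance."""
    faces = detector(gray_image)
    if not faces:
        return None
    pts = predictor(gray_image, faces[0])
    p = lambda i: np.array([pts.part(i).x, pts.part(i).y], dtype=float)
    mouth_width = np.linalg.norm(p(54) - p(48))   # mouth corners
    eye_distance = np.linalg.norm(p(45) - p(36))  # outer eye corners
    return mouth_width / eye_distance             # larger ratio -> wider smile
```

A score above a tuned threshold would be treated as a smile; the paper instead derives the decision from AAM feature points and the detected tooth area.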

Web-based 3D Face Modeling System for Hairline Modification Surgery (헤어라인 교정 시술을 위한 웹기반 얼굴 3D 모델링)

  • Lee, Sang-Wook;Jang, Yoon-Hee;Jeong, Eun-Young
    • The Journal of the Korea Contents Association / v.11 no.11 / pp.91-101 / 2011
  • This research aims to propose a web-based 3D face modeling system for hairline modification surgery. As public interest in facial beauty grows in the era of personal mobile smart ICT devices, the need for a supporting medical information system is increasing. We first attempted to build a 3D face modeling library from conventional technology and available proprietary software; the experiment revealed the problems and the requirements for developing a new web-based standard. Based on this experiment and a literature review of the relevant technologies, we propose a new system. The main features of the proposed system draw on studies of hair-loss treatment in medical science, beauty studies, and information technology. The system converts 2D frontal and profile face photographs into a 3D face model with mesh data, and the mesh data is compatible with web standard technologies, including SVG and the Canvas tag supported natively by HTML5.
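The system above stresses that its mesh output can be rendered with web standards such as SVG. A minimal sketch of that idea, assuming a simple orthographic projection of the mesh and a triangle index list; the function and file names are hypothetical.

```python
import numpy as np

def mesh_to_svg(vertices, faces, path="face_mesh.svg", size=512):
    """Project 3D mesh vertices (N x 3) orthographically onto the x-y plane
    and write the triangles as an SVG wireframe (web-standard output)."""
    v = np.asarray(vertices, dtype=float)[:, :2].copy()
    v -= v.min(axis=0)
    v *= (size - 20) / v.max()          # fit the mesh into the viewport
    v += 10
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for a, b, c in faces:               # one polygon per triangle
        pts = " ".join(f"{v[i, 0]:.1f},{v[i, 1]:.1f}" for i in (a, b, c))
        parts.append(f'<polygon points="{pts}" fill="none" stroke="black"/>')
    parts.append("</svg>")
    with open(path, "w") as f:
        f.write("\n".join(parts))
```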

3D Human Face Segmentation using Curvature Estimation (Curvature Estimation을 이용한 3차원 사람얼굴 세그멘테이션)

  • Seongdong Kim;Seonga Chin;Moonwon Choo
    • Journal of Korea Multimedia Society / v.6 no.6 / pp.985-990 / 2003
  • This paper presents a representation of the face and an analysis of its shape using features based on surface curvature estimation, together with a proposed rotation vector of the human face. Curvature-based surface features are well suited to experiments in 3D human face segmentation. Face surfaces are extracted, parameterized, and rotated using an active surface mesh model. The estimated features were tested and segmented by reconstructing surfaces from the face data and analytically computing the Gaussian (K) and mean (H) curvatures without thresholding.

  • PDF
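The segmentation entry above rests on analytically computed Gaussian (K) and mean (H) curvatures of the face surface. Below is a minimal sketch of that computation on a depth image, treating the range data as a Monge patch z = f(x, y); the discrete derivatives via numpy.gradient are an assumption, not the authors' implementation.

```python
import numpy as np

def gaussian_mean_curvature(z):
    """Gaussian (K) and mean (H) curvature maps of a depth image z(x, y)."""
    zy, zx = np.gradient(z)          # first derivatives (rows = y, cols = x)
    zxy, zxx = np.gradient(zx)       # second derivatives of zx
    zyy, _ = np.gradient(zy)
    w = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / w**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * w**1.5)
    return K, H
```

Surface points can then be labeled by the signs of K and H (peak, pit, ridge, valley, saddle), which is the usual basis for curvature-based segmentation.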

Point Recognition Precision Test of 3D Automatic Face Recognition Apparatus(3D-AFRA) (3차원 안면자동인식기(3D-AFRA)의 안면 표준점 인식 정확도 검증)

  • Seok, Jae-Hwa;Cho, Kyung-Rae;Cho, Yong-Beum;Yoo, Jung-Hee;Kwak, Chang-Kyu;Hwang, Min-U;Kho, Byung-Hee;Kim, Jong-Won;Kim, Kyu-Kon;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.19 no.1 / pp.50-59 / 2007
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions, and we are developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) to analyze facial characteristics; the apparatus produces a 3D image of a person's face and measures the facial figure. We therefore examined the accuracy of standard-point position recognition of the 3D-AFRA. 2. Methods: We photographed a face model marked with landmarks using the 3D-AFRA and also scanned it with a laser scanner (Vivid 700). We computed the average error of the distances between facial definition points and compared the results obtained with the 3D-AFRA to those obtained with the laser scanner, thereby examining the position recognition accuracy of the 3D-AFRA indirectly. 3. Results and Conclusions: In the frontal face image, the average distance error from the right pupil to the other facial definition points was 0.5140 mm, and from the left pupil 0.5949 mm. In the lateral face image, the average distance error from the left pupil was 0.5308 mm, and from the left tragion 0.6529 mm. In conclusion, the position recognition accuracy of the 3D-AFRA is considerably good.

  • PDF
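The precision test above compares distances from an anchor landmark (a pupil or the tragion) to every other facial definition point between the 3D-AFRA result and a reference laser scan. A minimal sketch of that error measure, assuming both devices report the same ordered set of 3D landmark coordinates.

```python
import numpy as np

def mean_point_distance_error(ref_points, test_points, anchor_index):
    """Average absolute error of distances from one anchor landmark to every
    other facial definition point, reference scan vs. apparatus under test."""
    ref = np.asarray(ref_points, dtype=float)
    test = np.asarray(test_points, dtype=float)
    ref_d = np.linalg.norm(ref - ref[anchor_index], axis=1)
    test_d = np.linalg.norm(test - test[anchor_index], axis=1)
    mask = np.arange(len(ref)) != anchor_index       # skip the anchor itself
    return float(np.mean(np.abs(ref_d[mask] - test_d[mask])))
```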

Facial Feature Extraction using Nasal Masks from 3D Face Image (코 형상 마스크를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.1-7 / 2004
  • This paper proposes a new facial feature extraction method that can be used to normalize face images for 3D face recognition. 3D images are far less sensitive to the illumination source than intensity images, which makes individual recognition possible, but input face images may vary in pose through rotation, panning, and tilting. If these variations are not taken into account, incorrect features may be extracted and the face recognition system will produce poor matches, so the input image must be normalized in size and orientation. Geometric facial features such as the nose, eyes, and mouth are commonly used in the normalization step, and the nose is the most prominent feature in a 3D face image. This paper therefore describes a nose feature extraction method using 3D nasal masks that resemble the real nasal shape.
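The method above locates the nose, the most prominent 3D facial feature, by matching nose-shaped masks against the range image. A brute-force normalized cross-correlation sketch of that matching step (the mask itself and the scoring choice are assumptions, not the paper's exact masks):

```python
import numpy as np

def locate_nose(depth_map, nose_mask):
    """Slide a nose-shaped depth template over the range image and return the
    location with the highest normalized cross-correlation score."""
    H, W = depth_map.shape
    h, w = nose_mask.shape
    tmpl = (nose_mask - nose_mask.mean()) / (nose_mask.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            win = depth_map[r:r + h, c:c + w]
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float((win * tmpl).mean())
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

In practice several mask sizes would be tried and the best score kept, after which the image can be normalized in size and orientation around the detected nose.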

A Software Error Examination of 3D Automatic Face Recognition Apparatus(3D-AFRA) : Measurement of Facial Figure Data (3차원 안면자동인식기(3D-AFRA)의 Software 정밀도 검사 : 형상측정프로그램 오차분석)

  • Seok, Jae-Hwa;Song, Jung-Hoon;Kim, Hyun-Jin;Yoo, Jung-Hee;Kwak, Chang-Kyu;Lee, Jun-Hee;Kho, Byung-Hee;Kim, Jong-Won;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.19 no.3 / pp.51-61 / 2007
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions, and we are developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) to analyze facial characteristics; the apparatus produces a 3D image and data of a person's face and measures facial figure data. We therefore examined the measurement error of the facial figure data produced by the 3D-AFRA as a software error analysis. 2. Methods: We scanned a face model using the 3D-AFRA and measured the lengths between facial definition parameters of the reconstructed data with the facial measurement program. 2.1 Repeatability test: We measured the lengths between facial definition parameters of the data restored by the 3D-AFRA ten times with the facial measurement program and compared the ten results with each other. 2.2 Measurement error test: We measured the lengths between facial definition parameters with two different programs, the facial measurement program and Rapidform2006, using both straight-line and curved-line measurements, and compared the results of the two programs. 3. Results and Conclusions: In the repeatability test, the standard deviation of the results was 0.084-0.450 mm. In the straight-line measurement error test, the average error was 0.0582 mm and the maximum error 0.28 mm; in the curved-line test, the average error was 0.413 mm and the maximum error 1.53 mm. In conclusion, the accuracy and repeatability of the facial measurement program are considerably good. We will continue to improve the accuracy of the 3D-AFRA in both hardware and software.

  • PDF
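The software check above has two parts: a repeatability test over ten repeated measurements and a measurement error test against Rapidform2006. Minimal sketches of both statistics, assuming the measured inter-landmark lengths are already available as arrays:

```python
import numpy as np

def repeatability_std(repeated_lengths):
    """Repeatability: standard deviation over repeated trials of the same
    inter-landmark lengths (rows = trials, columns = measured lengths)."""
    trials = np.asarray(repeated_lengths, dtype=float)
    return trials.std(axis=0, ddof=1)

def measurement_error(lengths_a, lengths_b):
    """Average and maximum absolute difference between two programs' length
    measurements (e.g. the facial measurement program vs. Rapidform2006)."""
    diff = np.abs(np.asarray(lengths_a, float) - np.asarray(lengths_b, float))
    return float(diff.mean()), float(diff.max())
```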

A 3D Face Modeling Method Using Region Segmentation and Multiple light beams (지역 분할과 다중 라이트 빔을 이용한 3차원 얼굴 형상 모델링 기법)

  • Lee, Yo-Han;Cho, Joo-Hyun;Song, Tai-Kyong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.6 / pp.70-81 / 2001
  • This paper presents a 3D face modeling method using a CCD camera and a projector (LCD or slide projector). The camera faces the human face and the projector casts white stripe patterns onto it. The 3D shape of the face is extracted from the spatial and temporal locations of the white stripe patterns over a series of image frames. The proposed method employs region segmentation and multi-beam techniques for efficient 3D modeling of the hair region and for faster 3D scanning, respectively. Each image is segmented into face, hair, and shadow regions, which are processed independently to obtain the optimum result for each region. The multi-beam method, which uses a number of equally spaced stripe patterns, reduces the total number of image frames and consequently the overall data acquisition time. Light beam calibration is adopted for efficient light plane measurement, which is not influenced by the direction (vertical or horizontal) of the stripe patterns. Experimental results show that the proposed method provides favorable 3D face modeling results, including the hair region.

  • PDF
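The modeling method above recovers depth from projected stripe patterns: each stripe corresponds to a calibrated light plane, and a surface point is the intersection of that plane with the camera ray through the stripe pixel. A minimal triangulation sketch under those assumptions (pinhole intrinsics K, plane written as n·x + d = 0); this is the generic structured-light relation, not the paper's calibration procedure.

```python
import numpy as np

def backproject_pixel(K, u, v):
    """Back-project image pixel (u, v) into a viewing ray in camera coordinates."""
    return np.linalg.inv(K) @ np.array([u, v, 1.0])

def stripe_point(K, u, v, plane_normal, plane_d):
    """3D point where the camera ray through (u, v) meets the light plane
    n.x + d = 0 cast by one projected stripe."""
    ray = backproject_pixel(K, u, v)
    n = np.asarray(plane_normal, dtype=float)
    t = -plane_d / float(n @ ray)
    return t * ray
```

The region segmentation (face, hair, shadow) and the multi-beam scheme in the paper then decide which light plane each pixel belongs to and reduce the number of frames that must be captured.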

LVQ network for a face image recognition of the 3D (3D 얼굴 영상 인식을 위한 LVQ 네트워크)

  • 김영렬;박진성;임성진;이용구;엄기환
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.151-154 / 2003
  • In this paper, we propose a method for recognizing 3D face images using an LVQ network. The proposed LVQ network is trained on frontal face images acquired with coded light and can then group face images that include side views at various angles. To verify the usefulness of the algorithm, experiments classifying face images by viewing angle were carried out.

  • PDF
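The entry above classifies coded-light face images with an LVQ network. A minimal LVQ1 training loop is sketched below as a generic illustration; the learning rate, epoch count, and prototype initialization are assumptions, since the conference abstract does not specify the network configuration.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """Plain LVQ1: pull the nearest prototype toward a same-class sample,
    push it away from a different-class sample."""
    P = np.asarray(prototypes, dtype=float).copy()
    for _ in range(epochs):
        for x, label in zip(np.asarray(X, dtype=float), y):
            j = int(np.argmin(np.linalg.norm(P - x, axis=1)))  # best-matching unit
            if proto_labels[j] == label:
                P[j] += lr * (x - P[j])
            else:
                P[j] -= lr * (x - P[j])
    return P

def lvq_classify(x, prototypes, proto_labels):
    """Assign the label of the nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]
```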

Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.727-732 / 2004
  • In this paper, we introduce a new approach to real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination the pupils appear bright, and we develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified in this eigen eye feature space. In experiments, the discrimination rate for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
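The pose discrimination above projects pupil-geometry features into an "eigen eye" space learned from training data and classifies the query pose there. A minimal sketch of the eigen-space construction (PCA via SVD) and a nearest-neighbor pose lookup, assuming fixed-length pupil-geometry feature vectors; the actual feature definition is not reproduced here.

```python
import numpy as np

def build_eigen_space(features, n_components=4):
    """PCA of training pupil-geometry vectors: return the mean and the
    leading principal components (the 'eigen eye' basis)."""
    F = np.asarray(features, dtype=float)
    mean = F.mean(axis=0)
    _, _, Vt = np.linalg.svd(F - mean, full_matrices=False)
    return mean, Vt[:n_components]

def classify_pose(query, train_features, train_poses, mean, components):
    """Project the query into the eigen space and return the pose label of
    the nearest training sample."""
    project = lambda f: (np.asarray(f, dtype=float) - mean) @ components.T
    train_proj = np.vstack([project(f) for f in train_features])
    d = np.linalg.norm(train_proj - project(query), axis=1)
    return train_poses[int(np.argmin(d))]
```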

Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification (비디오 얼굴 식별 성능개선을 위한 다중 심층합성곱신경망 결합 구조 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.655-664 / 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. The resulting sub-video sequences are fed to the 3D DCNN to obtain class-confidence scores for the input video, taking both the temporal and the spatial characteristics of the face into account. The class-confidence scores obtained from the sub-video sequences are combined into our proposed class-confidence matrix, which is then used as the input for training a 2D DCNN that is serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity in a given test video sequence. To verify the effectiveness of the proposed method, extensive comparative experiments were conducted on the COX face database with its standard face identification protocols. Experimental results show that our method achieves identification rates that are better than or comparable to other state-of-the-art video FR methods.
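The combined architecture above first scores fixed-length sub-sequences with a 3D DCNN and stacks the per-clip class confidences into a class-confidence matrix, which becomes the input of the serially linked 2D DCNN. A minimal sketch of the matrix construction step, with the 3D DCNN abstracted as a callable that returns per-class confidences; the clip length and non-overlapping stride are assumptions.

```python
import numpy as np

def class_confidence_matrix(video_frames, clip_len, score_3d_dcnn):
    """Split a video into fixed-length sub-sequences, score each with a 3D DCNN
    (a callable returning a per-class confidence vector), and stack the scores
    into the class-confidence matrix fed to the serially linked 2D DCNN."""
    clips = [video_frames[i:i + clip_len]
             for i in range(0, len(video_frames) - clip_len + 1, clip_len)]
    return np.vstack([score_3d_dcnn(clip) for clip in clips])  # (n_clips, n_classes)
```

The resulting (n_clips x n_classes) matrix is what the 2D DCNN is trained on in the proposed serial combination.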