• Title/Summary/Keyword: 3D face

Search Results: 911

A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.783-788 / 2002
  • Recently, the research and development direction of communication systems has been to use both voice data and face images during speech, to provide a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the protrusion height of the lips. First, we fit the 3-D face model to the speaking-face image sequence. Then, to obtain the feature information, we compute the variation of the fitted 3-D shape model over the image sequence and use this variation as the recognition parameters. We use the intensity gradient values obtained from the variation of the 3-D feature points to segment the sequential images into recognition units. We then apply a discrete HMM algorithm in the recognition process, based on multiple observation sequences that fully consider the variation of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.
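
As a rough illustration of the kind of feature extraction this abstract describes, the sketch below (assumed landmark indices and quantization levels, not the authors' code) turns fitted 3-D lip/jaw points into per-frame features and discrete HMM observation symbols:

```python
# Minimal sketch (not the authors' code): turning fitted 3-D lip/jaw landmarks
# into per-frame features and discrete HMM observation symbols.
# Landmark indices and the number of quantization levels are illustrative assumptions.
import numpy as np

UPPER_LIP, LOWER_LIP, CHIN, NOSE_TIP = 0, 1, 2, 3  # hypothetical indices

def frame_features(points):
    """points: (N, 3) array of fitted 3-D feature points for one frame."""
    lip_opening = np.linalg.norm(points[UPPER_LIP] - points[LOWER_LIP])
    jaw_drop = np.linalg.norm(points[CHIN] - points[NOSE_TIP])
    lip_protrusion = points[LOWER_LIP][2] - points[NOSE_TIP][2]  # depth axis
    return np.array([lip_opening, jaw_drop, lip_protrusion])

def to_observation_symbols(sequence, n_levels=8):
    """Quantize frame-to-frame feature variation into discrete symbols for an HMM."""
    feats = np.stack([frame_features(p) for p in sequence])   # (T, 3)
    deltas = np.diff(feats, axis=0, prepend=feats[:1])        # variation between frames
    # normalize each feature to [0, 1) and bin it
    norm = (deltas - deltas.min(0)) / (deltas.ptp(0) + 1e-8)
    bins = np.minimum((norm * n_levels).astype(int), n_levels - 1)
    # combine the three quantized features into one symbol per frame
    return bins[:, 0] * n_levels**2 + bins[:, 1] * n_levels + bins[:, 2]
```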

Quartz Concentration and Respirable Dust of Coal Mines in Taeback and Kangneung Areas (태백 및 강릉지역 석탄광의 호흡성 분진과 석영농도에 관한 조사)

  • Choi, Ho-Chun;Cheon, Yong-Hee;Yoon, Young-No;Kim, Hae-Jeong
    • Journal of Preventive Medicine and Public Health / v.20 no.2 s.22 / pp.261-269 / 1987
  • In order to investigate working conditions in underground coal mines, this study evaluated respirable dust and quartz concentrations in the Taeback and Kangneung areas. The quartz concentration was determined by Fourier Transform Infrared Spectrophotometry. The results were as follows:
    1) Respirable dust concentrations at drilling sites and coal faces (mg/m³), arithmetic mean ± S.D.: Taeback drilling 2.00 ± 1.56, Taeback coal face 3.74 ± 3.14, Kangneung drilling 4.55 ± 4.51, Kangneung coal face 5.77 ± 4.53; geometric mean ± S.D.: Taeback drilling 1.34 ± 2.81, Taeback coal face 2.55 ± 2.61, Kangneung drilling 2.44 ± 3.63, Kangneung coal face 4.24 ± 2.37.
    2) The distribution of respirable dust fitted a log-normal distribution well, with a geometric mean of log⁻¹ 0.37 ± log⁻¹ 0.47 (2.34 ± 2.95) mg/m³.
    3) The difference in respirable dust concentrations between the Taeback and Kangneung areas was not statistically significant (p > 0.05).
    4) Quartz concentrations at drilling sites and coal faces (%), arithmetic mean ± S.D.: Taeback drilling 6.18 ± 5.52, Taeback coal face 1.89 ± 1.54, Kangneung drilling 3.54 ± 2.12, Kangneung coal face 2.05 ± 3.37; geometric mean ± S.D.: Taeback drilling 4.24 ± 2.59, Taeback coal face 1.39 ± 2.22, Kangneung drilling 2.55 ± 3.08, Kangneung coal face 1.24 ± 2.33.
    5) The distribution of quartz concentrations fitted a log-normal distribution well, with a geometric mean of log⁻¹ 0.33 ± log⁻¹ 0.45 (2.14 ± 2.82)%.
    6) The difference in quartz concentrations between the Taeback and Kangneung areas was not significant (p > 0.05), but the difference between drilling sites and coal faces was significant (p < 0.05).
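
The geometric statistics reported above follow from taking logarithms of the samples; a minimal sketch (with placeholder sample values, not the study's data) of how a geometric mean and geometric standard deviation are computed:

```python
# Minimal sketch: geometric mean and geometric S.D. for log-normally
# distributed dust measurements. The sample values are placeholders,
# not data from the study.
import numpy as np

samples_mg_m3 = np.array([1.2, 2.8, 0.9, 4.1, 2.2])  # hypothetical respirable dust samples

logs = np.log10(samples_mg_m3)
gm = 10 ** logs.mean()           # geometric mean, i.e. log^-1 of the mean of the logs
gsd = 10 ** logs.std(ddof=1)     # geometric standard deviation

print(f"geometric mean = {gm:.2f} mg/m^3, geometric S.D. = {gsd:.2f}")
```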


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 김동수;남기환;한준희;배철수;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1998.11a / pp.181-185 / 1998
  • Recently, the research and development direction of communication systems has been to use both voice data and face images during speech, to provide a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the protrusion height of the lips. First, we fit the 3-D face model to the speaking-face image sequence. Then, to obtain the feature information, we compute the variation of the fitted 3-D shape model over the image sequence and use this variation as the recognition parameters. We use the intensity gradient values obtained from the variation of the 3-D feature points to segment the sequential images into recognition units. We then apply a discrete HMM algorithm in the recognition process, based on multiple observation sequences that fully consider the variation of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.
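
For the recognition step described above, a discrete HMM scores each observation sequence under each trained vowel/consonant model. A minimal, generic sketch of that scoring (the scaled forward algorithm, not the authors' implementation):

```python
# Generic sketch: log-likelihood of a discrete observation sequence under one HMM,
# computed with the scaled forward algorithm. Recognition picks the model with
# the highest score.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """obs: sequence of observation symbols; pi: (S,) initial probabilities;
    A: (S, S) transition matrix; B: (S, M) emission probabilities for M symbols."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()          # scale to avoid numerical underflow
        log_lik += np.log(s)
        alpha /= s
    return log_lik

# Recognition: evaluate the symbol sequence under each trained model, take the argmax.
# models = {"a": (pi_a, A_a, B_a), "o": (pi_o, A_o, B_o), ...}   # hypothetical models
# best = max(models, key=lambda k: forward_log_likelihood(symbols, *models[k]))
```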


Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin;Kim, Jaewon;Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery / v.1 no.2 / pp.57-61 / 2014
  • We present a robust 3D facial reconstruction method that uses a single image generated by a face-specific super-resolution (SR) technique. From several consecutive low-resolution frames, we generate a single high-resolution image and build a three-dimensional facial model from it. To do this, we apply the PME method to compute patch similarities for SR after two-phase warping according to facial attributes. From the SR image, we extract facial features automatically and reconstruct the 3D facial model, with a basis selected adaptively according to facial statistical data, in less than a few seconds. Thereby, we can provide facial images from various viewpoints that cannot be obtained from the single viewpoint of a camera.
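
The paper's basis-selection and PME details are not given here; as a generic illustration only, a statistical shape basis can be fitted to 2-D landmarks detected on the super-resolved image with linear least squares under a weak-perspective assumption:

```python
# Generic sketch (assumptions, not the paper's algorithm): recover coefficients
# of a statistical 3-D shape basis from 2-D facial landmarks, using a
# weak-perspective projection and least squares.
import numpy as np

def fit_shape_coefficients(landmarks_2d, mean_shape, basis, scale=1.0):
    """landmarks_2d: (L, 2); mean_shape: (L, 3); basis: (K, L, 3) shape modes."""
    # project the mean shape and each basis mode onto the image plane (drop depth)
    P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]) * scale   # (2, 3) projection
    residual = landmarks_2d - mean_shape @ P.T                  # (L, 2)
    A = np.stack([(b @ P.T).ravel() for b in basis], axis=1)    # (2L, K)
    coeffs, *_ = np.linalg.lstsq(A, residual.ravel(), rcond=None)
    # reconstructed 3-D landmark positions: mean shape plus weighted modes
    return mean_shape + np.tensordot(coeffs, basis, axes=1)
```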

Face Pose Estimation using Stereo Image (스테레오 영상을 이용한 얼굴 포즈 추정)

  • So, In-Mi;Kang, Sun-Kyung;Kim, Young-Un;Lee, Chi-Geun;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.3 / pp.151-159 / 2006
  • In this paper, we present a method for estimating the face pose from two camera images. First, it finds corresponding facial feature points of the eyebrows, eyes, and lips in the two images. It then computes the three-dimensional locations of the facial feature points using the triangulation method of stereo vision. Next, it forms a triangle from the extracted facial feature points and computes the surface normal vector of the triangle. The surface normal of the triangle represents the direction of the face. We applied the computed face pose to display a 3D face model. The experimental results show that the proposed method extracts the face pose correctly.
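
The pose computation described above reduces to the unit normal of a triangle spanned by three triangulated feature points; a minimal sketch (with arbitrary example coordinates):

```python
# Minimal sketch of the pose step: the unit normal of a triangle formed by three
# triangulated 3-D feature points gives the facing direction. Example points are arbitrary.
import numpy as np

def face_direction(p_left_eye, p_right_eye, p_lip):
    """Each argument is a 3-D point obtained from stereo triangulation."""
    n = np.cross(p_right_eye - p_left_eye, p_lip - p_left_eye)
    return n / np.linalg.norm(n)   # unit surface normal = face direction

print(face_direction(np.array([-3.0, 0.0, 0.0]),
                     np.array([ 3.0, 0.0, 0.0]),
                     np.array([ 0.0, -5.0, 1.0])))
```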


Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1591-1600 / 2007
  • Camera position information from a 2D face image is very important for synchronizing a virtual 3D face model with the real face at the same viewpoint, and it is also very important for other uses such as human-computer interfaces (face mouse), automatic camera control, etc. We present an algorithm that detects the human face region and mouth based on the characteristic color features of the face and mouth in the YCbCr color space. The algorithm constructs a mouth feature image based on the Cb and Cr values and uses a pattern method to detect the mouth position. We then use the geometrical relationship between the mouth position and the face side boundary to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, and its correct determination rate is sufficient for practical application.
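
As an illustration of the color-space step, lip pixels tend to show high Cr and comparatively low Cb; the sketch below uses assumed thresholds, not the paper's values:

```python
# Illustrative sketch (thresholds are assumptions, not the paper's values):
# convert RGB to YCbCr and build a rough mouth map from Cb/Cr.
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: (H, W, 3) float array in [0, 255]; returns Y, Cb, Cr (ITU-R BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =        0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def mouth_feature_map(rgb):
    _, cb, cr = rgb_to_ycbcr(rgb.astype(np.float64))
    # lips: strong Cr, comparatively low Cb; the thresholds below are illustrative only
    return (cr > 145) & (cb < 125)
```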


Evaluation of Histograms Local Features and Dimensionality Reduction for 3D Face Verification

  • Ammar, Chouchane;Mebarka, Belahcene;Abdelmalik, Ouamane;Salah, Bourennane
    • Journal of Information Processing Systems / v.12 no.3 / pp.468-488 / 2016
  • The paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features in the presence of illumination and expression variations. Histograms of efficient local descriptors are used to represent the facial images distinctively. For this purpose, different local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Furthermore, experiments on combinations of the four local descriptors at the feature level, using simple histogram concatenation, are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and the combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as the classifier to carry out the verification between impostors and clients. The proposed method has been tested on the CASIA 3D face database, and the experimental results show that it achieves high verification performance.
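
A generic sketch of this kind of pipeline (block-wise LBP histograms, PCA, then an SVM), with illustrative parameters rather than the paper's configuration:

```python
# Minimal sketch, not the paper's exact setup: uniform-LBP histograms per image
# block, PCA for dimensionality reduction, and an SVM for the final decision.
# Grid size, LBP parameters, and the PCA dimension are illustrative choices.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_histogram_features(image, grid=(8, 8), P=8, R=1):
    """Concatenate uniform-LBP histograms computed on a grid of blocks."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                      # number of uniform-LBP codes
    feats = []
    for rows in np.array_split(lbp, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# X: stacked feature vectors, y: client/impostor labels (placeholders)
# X = np.stack([lbp_histogram_features(img) for img in images]); y = labels
# model = make_pipeline(PCA(n_components=100), SVC(kernel="rbf"))
# model.fit(X, y)
```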

3D Augmented Reality Streaming System Based on a Lamina Display

  • Baek, Hogil;Park, Jinwoo;Kim, Youngrok;Park, Sungwoong;Choi, Hee-Jin;Min, Sung-Wook
    • Current Optics and Photonics / v.5 no.1 / pp.32-39 / 2021
  • We propose a three-dimensional (3D) streaming system based on a lamina display that can convey field information in real time by creating floating 3D images that satisfy the accommodation cue. The proposed system is mainly composed of three parts: a 3D vision camera unit to obtain and provide RGB and depth data in real time, a 3D image engine unit to realize the 3D volume with a fast response time using the RGB and depth data, and an optical floating unit to bring the implemented 3D image out of the system and consequently increase the sense of presence. Furthermore, we devise the streaming method required for implementing augmented reality (AR) images using a multilayered image, and the proposed method for implementing AR 3D video in real-time non-face-to-face communication has been experimentally verified.
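
As a rough sketch of how RGB and depth data can be turned into a multilayered image of the sort a layered display consumes (layer count and depth range are illustrative assumptions, not the authors' settings):

```python
# Rough sketch (assumption, not the authors' pipeline): slicing an RGB-D frame
# into a small stack of depth layers for a layered (lamina-style) 3D display.
import numpy as np

def slice_into_layers(rgb, depth, n_layers=8, d_min=0.3, d_max=1.5):
    """rgb: (H, W, 3); depth: (H, W) in meters. Returns an (n_layers, H, W, 4) RGBA stack."""
    edges = np.linspace(d_min, d_max, n_layers + 1)
    layers = np.zeros((n_layers, *rgb.shape[:2], 4), dtype=rgb.dtype)
    for i in range(n_layers):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        layers[i, ..., :3] = rgb * mask[..., None]   # keep only pixels in this depth slice
        layers[i, ..., 3] = mask                     # alpha: 1 where the slice is occupied
    return layers
```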

VALIDITY OF SUPERIMPOSITION RANGE AT 3-DIMENSIONAL FACIAL IMAGES (안면 입체영상 중첩시 중첩 기준 범위 설정에 따른 적합도 차이)

  • Choi, Hak-Hee;Cho, Jin-Hyoung;Park, Hong-Ju;Oh, Hee-Kyun;Choi, Jin-Hugh;Hwang, Hyeon-Shik;Lee, Ki-Heon
    • Maxillofacial Plastic and Reconstructive Surgery / v.31 no.2 / pp.149-157 / 2009
  • Purpose: This study evaluated the validity of the superimposition range for facial images constructed with a 3-dimensional (3D) surface laser scanning system. Materials and methods: Thirty adults with no severe skeletal discrepancy were selected and scanned twice by a 3D laser scanner (VIVID 910, Minolta, Tokyo, Japan) with 12 markers placed on the face. Two 3D facial images (T1: baseline, T2: 30 minutes later) were then reconstructed respectively and superimposed in several ways with the RapidForm™ 2006 (Inus, Seoul, Korea) software program. The distances between markers at the same location on the face were measured in the superimposed 3D facial images, and measurements were made for all 12 markers. Results: The average linear distance between markers at the same location was 0.92 ± 0.23 mm when the superimposition was based on the upper 2/3 of the face, 0.98 ± 0.26 mm for the upper 1/2 of the face, 0.99 ± 0.24 mm for the upper 1/3 of the face and nose area, 1.41 ± 0.48 mm for the upper 1/3 of the face, and 0.83 ± 0.13 mm for the whole face. There were no statistically significant differences in the linear distances of the markers placed on the areas included in the superimposition range used for the partial registration methods, but there were significant differences in the linear distances of the markers placed on the areas not included in the superimposition range between the whole-face registration method and the partial registration methods. Conclusion: These results suggest that the validity of superimposition decreases as the superimposition range is reduced when superimposing 3D images of the same subject constructed with a 3D laser scanner.
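
The fit values quoted above come from superimposing two scans and measuring residual distances between corresponding markers; a generic sketch of that idea (Kabsch rigid alignment, not the RapidForm procedure):

```python
# Generic sketch (not the RapidForm procedure): rigidly align two sets of
# corresponding facial markers with the Kabsch algorithm, then report the mean
# distance between matched markers, i.e. the kind of fit value quoted above.
import numpy as np

def superimpose_and_measure(markers_t1, markers_t2):
    """markers_t1, markers_t2: (N, 3) corresponding marker coordinates in mm."""
    c1, c2 = markers_t1.mean(0), markers_t2.mean(0)
    a, b = markers_t1 - c1, markers_t2 - c2          # center both marker sets
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))           # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T        # optimal rotation mapping a onto b
    aligned = a @ rot.T + c2                         # T1 markers mapped into the T2 frame
    dists = np.linalg.norm(aligned - markers_t2, axis=1)
    return dists.mean(), dists.std()
```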