• Title/Summary/Keyword: Face image


Implementation of Nose and Face Detections in Depth Image

  • Kim, Heung-jun;Lee, Dong-seok;Kwon, Soon-kak
    • Journal of Multimedia Information System / v.4 no.1 / pp.43-50 / 2017
  • In this paper, we propose a method that detects the nose and face of a person using a depth image. The proposed method offers low computational complexity and high accuracy even in dark environments, and its detection accuracy does not degrade across varied postures. The method first locates the locally protruding part of the human body captured by the depth camera, then confirms the nose from the depth characteristics of the nose and its surrounding pixels. After finding the correct nose pixel, we define a region of interest centered on the nose, whose size varies with the depth value of the nose. The face region is then found by binarization using the depth histogram within the region of interest. The proposed method detects the nose and the face accurately regardless of the pose or the illumination of the captured area.
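
The pipeline above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the focal length, assumed face width, and fixed face depth range are hypothetical constants, and the fixed range stands in for the paper's depth-histogram binarization.

```python
import numpy as np

def detect_nose_and_face(depth, focal_px=570.0, face_cm=16.0, face_depth_mm=200.0):
    """Depth-only nose/face detection sketch (depth in millimetres):
    1) take the locally protruding (closest) valid pixel as the nose tip;
    2) centre a region of interest on it, sized inversely to the nose depth;
    3) keep ROI pixels within an assumed face depth range as the face
       (a crude stand-in for the paper's depth-histogram binarization)."""
    d = np.where(depth > 0, depth, np.inf)            # zero depth = no measurement
    ny, nx = np.unravel_index(np.argmin(d), d.shape)  # closest pixel ~ nose tip
    z = depth[ny, nx]                                 # nose depth in mm
    half = max(4, int(focal_px * face_cm * 10.0 / (2.0 * z)))  # ROI half-size, px
    y0, y1 = max(0, ny - half), min(depth.shape[0], ny + half)
    x0, x1 = max(0, nx - half), min(depth.shape[1], nx + half)
    roi = depth[y0:y1, x0:x1]
    face_mask = (roi > 0) & (roi <= z + face_depth_mm)
    return (ny, nx), (y0, x0, y1, x1), face_mask
```

For an upright subject facing the camera, the nose is usually the closest valid body pixel, which is why the first step needs no trained detector; the variable ROI size implements the abstract's depth-dependent region of interest.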

Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan;Seo, Heesuk;Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.107-114 / 2016
  • Recently, 3D-related technology has become a hot topic in IT: 3DTV, Kinect, and 3D printers are becoming increasingly popular. Following this trend, the goal of this study is to make 3D technology easily accessible to the general public. We have developed a web-based application that performs 3D modeling from front and side facial photographs taken with a mobile phone. Two photographs (front and side) are captured with the mobile camera, and ASM (Active Shape Model) and skin binarization techniques are used to extract facial landmarks and facial height (such as that of the nose) from them. Three-dimensional coordinates are generated from the face extracted from the front photograph and the facial height obtained from the side photograph. Using these coordinates as control points for a standard face model and applying RBF (Radial Basis Function) interpolation, the standard model is deformed into the subject's face. To cover the deformed face model, the control points found in the front photograph are mapped to texture-map coordinates to generate a texture image. Finally, the deformed face model is covered with the texture image, and the 3D model is displayed to the user.
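
The deformation step, warping the standard face model so its control points land on the subject's landmarks via RBF interpolation, can be sketched as follows. This is a generic RBF warp under assumed choices: the linear kernel and the tiny regularisation term are mine, since the abstract does not specify them.

```python
import numpy as np

def rbf_deform(src_ctrl, dst_ctrl, vertices, eps=1e-9):
    """RBF-based mesh warping sketch: solve for weights that reproduce the
    control-point displacements exactly, then move every mesh vertex by the
    interpolated displacement field."""
    def phi(r):
        return r  # linear radial basis phi(r) = r (an assumed kernel choice)
    # pairwise distances between control points -> interpolation matrix
    d = np.linalg.norm(src_ctrl[:, None, :] - src_ctrl[None, :, :], axis=-1)
    A = phi(d) + eps * np.eye(len(src_ctrl))        # small ridge for stability
    w = np.linalg.solve(A, dst_ctrl - src_ctrl)     # (n_ctrl, dim) weight rows
    # evaluate the displacement field at every mesh vertex
    dv = np.linalg.norm(vertices[:, None, :] - src_ctrl[None, :, :], axis=-1)
    return vertices + phi(dv) @ w
```

Vertices far from any control point move by a smoothly extrapolated displacement, which is what lets a sparse set of facial landmarks deform the whole standard mesh.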

A Study on Face Recognition on an UMPC (UMPC 환경에서의 얼굴인식 연구)

  • Nam, Gi-Pyo;Kang, Byung-Jun;Jeong, Dae-Sik;Park, Kang-Ryoung
    • Proceedings of the IEEK Conference / 2008.06a / pp.831-832 / 2008
  • This paper presents experimental results and analysis of face recognition on a conventional UMPC (Ultra Mobile Personal Computer). From face images acquired by the UMPC's embedded camera, we detected the facial region using the AdaBoost face detector. The detected image was normalized to 32×32 pixels for face recognition, which we performed based on PCA (Principal Component Analysis). In our experiments, the TER (Total Error Rate) of face recognition was 19.77%.
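
The PCA (eigenface) recognition stage can be sketched as below. This is a textbook illustration on flattened 32×32 crops, not the authors' implementation, and the component count `k` is an assumption.

```python
import numpy as np

def train_pca(faces, k=8):
    """Eigenfaces in a few lines: faces is (n, 32*32) of flattened crops.
    SVD of the centred data gives the principal axes directly."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def project(mean, basis, face):
    """Coordinates of one flattened face in the k-dimensional eigenspace."""
    return basis @ (face - mean)

def recognise(mean, basis, gallery_feats, labels, probe):
    """Nearest-neighbour match of the probe's eigenspace coordinates."""
    f = project(mean, basis, probe)
    i = np.argmin(np.linalg.norm(gallery_feats - f, axis=1))
    return labels[i]
```

Recognition reduces to a nearest-neighbour search in a k-dimensional space, which keeps memory and compute low enough for a UMPC-class device.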


Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.7-15 / 2014
  • This paper proposes a facial expression recognition algorithm using PCA and template matching. First, the face is located in the input image using a Haar-like feature mask and divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. To extract the facial components, an eigenface is produced by PCA training on a set of learning images, and an eigeneye and an eigenmouth are derived from it. The eye image is then obtained by template-matching the upper image against the eigeneye, and the mouth image by template-matching the lower image against the eigenmouth. Expression recognition uses geometrical properties of the extracted eye and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous work; the extraction ratio for the mouth image in particular reaches 99%. The expression recognition system based on the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
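
The matching step, sliding the eigeneye (or eigenmouth) over the upper (or lower) half-image, is ordinary normalized cross-correlation. A brute-force sketch (an illustrative re-implementation, not the authors' code):

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalised cross-correlation search: returns the top-left
    position of the best-matching window and its NCC score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tn
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Restricting the search to the upper or lower half-image, as the paper does, both halves the cost and removes eye/mouth confusions.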

Real-Time Rotation-Invariant Face Detection Using Combined Depth Estimation and Ellipse Fitting

  • Kim, Daehee;Lee, Seungwon;Kim, Dongmin
    • IEIE Transactions on Smart Processing and Computing / v.1 no.2 / pp.73-77 / 2012
  • This paper reports a combined depth- and model-based face detection and tracking approach. The proposed algorithm consists of four functional modules: i) color-based candidate region extraction, ii) generation of a depth histogram for handling occlusion, iii) rotation-invariant face region detection using ellipse fitting, and iv) face tracking based on motion prediction. The technique solves the occlusion problem in complicated environments by detecting face candidate regions from the depth histogram and skin color. The rotation angle is estimated by ellipse fitting in the detected candidate regions. The face region is finally determined by inversely rotating the candidate regions by the estimated angle and applying Haar-like features robustly trained on frontal faces.
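
The rotation-estimation module (iii) can be illustrated with a moment-based ellipse fit, a common stand-in for a full conic fit when only the orientation angle is needed (the choice of fitting method here is an assumption; the paper does not specify its procedure):

```python
import numpy as np

def ellipse_angle(mask):
    """Orientation of the major axis of the best-fitting ellipse of a binary
    region, from second-order image moments; angle in radians from the
    x-axis (image columns)."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```

The candidate region can then be rotated by the negative of this angle so that the frontal-trained Haar-like classifier sees an approximately upright face.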


Face Recognition by Fusion of Face Images and Estimated Thermal Infrared Texture (얼굴영상과 예측한 열 적외선 텍스처의 융합에 의한 얼굴 인식)

  • Kong, Seong G.
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.437-443 / 2015
  • This paper presents face recognition based on the fusion of a visible image and the thermal infrared (IR) texture estimated from that face image. The proposed scheme uses a multilayer neural network to estimate thermal texture from visible imagery. In the training process, a set of visible and thermal IR image pairs is used to determine the parameters of the neural network, which learns a complex mapping from a visible image to its thermal texture in a low-dimensional feature space. The trained network estimates the principal components of the thermal texture corresponding to the input visible image. Extensive face recognition experiments were performed with two popular algorithms, Eigenfaces and Fisherfaces, on the NIST/Equinox database for benchmarking. The fusion of visible images and thermal IR texture improved face recognition accuracy over conventional face recognition in terms of receiver operating characteristics (ROC) as well as first-match performance.
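
The visible-to-thermal mapping can be illustrated with a toy one-hidden-layer regressor trained by plain gradient descent. The network size, learning rate, and epoch count below are assumptions, and in the paper both inputs and targets are low-dimensional PCA features rather than raw vectors.

```python
import numpy as np

def train_texture_mapper(visible, thermal_pcs, hidden=16, lr=0.05,
                         epochs=1000, seed=0):
    """Toy multilayer-network regression from visible feature vectors to
    thermal-texture principal components; returns a prediction function."""
    rng = np.random.default_rng(seed)
    n, d_in = visible.shape
    d_out = thermal_pcs.shape[1]
    w1 = rng.normal(0, 0.3, (d_in, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 0.3, (hidden, d_out)); b2 = np.zeros(d_out)
    for _ in range(epochs):
        h = np.tanh(visible @ w1 + b1)          # hidden activations
        pred = h @ w2 + b2
        err = pred - thermal_pcs                # gradient of mean-squared error
        gh = (err @ w2.T) * (1.0 - h ** 2)      # backprop through tanh
        w2 -= lr * (h.T @ err) / n; b2 -= lr * err.mean(0)
        w1 -= lr * (visible.T @ gh) / n; b1 -= lr * gh.mean(0)
    return lambda x: np.tanh(x @ w1 + b1) @ w2 + b2
```

At test time the predicted principal components are simply decoded back through the thermal PCA basis to obtain the estimated thermal texture that is fused with the visible image.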

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1566-1576 / 2007
  • Camera pose information estimated from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, and also for applications such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present an algorithm that determines the camera pose from a single 2D face image using the relationship between the mouth position and the face region boundary. The algorithm first corrects color bias with a lighting compensation step, then nonlinearly transforms the image into the YCbCr color space and uses the visible chrominance features of the face in this space to detect the face region. For each face candidate, the nearly inverse relationship between the Cb and Cr clusters of facial features is used to detect the mouth position. The geometrical relationship between the mouth position and the face region boundary then determines the camera's rotation angles about the x- and y-axes, and the relationship between face region size and camera-face distance determines that distance. Experimental results demonstrate the validity of the algorithm, with a determination rate accurate enough for practical application.
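
The final geometric step can be sketched as follows. Every constant here (focal length, physical face width) and the arcsin angle mapping are illustrative assumptions; the paper derives its own relationships between mouth position, face boundary, and pose.

```python
import numpy as np

def head_pose_from_mouth(mouth, face_box, face_width_px,
                         real_face_cm=15.0, focal_px=800.0):
    """Pose-from-geometry sketch: the offset of the mouth from the face-box
    centre gives yaw/pitch, and the apparent face width gives the
    camera-face distance via the pinhole model."""
    x0, y0, x1, y1 = face_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w, half_h = (x1 - x0) / 2.0, (y1 - y0) / 2.0
    # normalised offsets in [-1, 1], mapped to angles with arcsin
    yaw = np.degrees(np.arcsin(np.clip((mouth[0] - cx) / half_w, -1, 1)))
    pitch = np.degrees(np.arcsin(np.clip((mouth[1] - cy) / half_h, -1, 1)))
    distance_cm = focal_px * real_face_cm / face_width_px  # pinhole model
    return yaw, pitch, distance_cm
```

A frontal face actually places the mouth somewhat below the box centre; that fixed reference offset is omitted here for brevity and would be calibrated in practice.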


A Study On Face Feature Points Using Active Discrete Wavelet Transform (Active Discrete Wavelet Transform를 이용한 얼굴 특징 점 추출)

  • Chun, Soon-Yong;Zijing, Qian;Ji, Un-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.1 / pp.7-16 / 2010
  • Face recognition is an active subject in computer pattern recognition with a wide range of potential applications. Automatic extraction of feature points from a face image is an important step in automatic face recognition, and whether the facial features are extracted correctly directly influences recognition performance. In this paper, a new method of facial feature extraction based on the Discrete Wavelet Transform is proposed. First, the face image is captured with a PC camera. Second, the image is decomposed using the discrete wavelet transform. Finally, horizontal- and vertical-direction projection methods are used to extract the facial features, from which face recognition can be performed. The results show that this method extracts facial feature points quickly and accurately, and that it locates facial features more robustly than traditional methods.
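
The decomposition-plus-projection idea can be reproduced with a dependency-free Haar transform (a minimal implementation of my own; the paper does not name its wavelet, so Haar is an assumption):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation band and the detail bands from column differences,
    row differences, and both (even-sized input assumed)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # responds to vertical edges
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # responds to horizontal edges
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def projection_peaks(band):
    """Row/column energy projections of a detail band; the peak row and
    column indicate likely feature positions (eyes, mouth)."""
    e = band ** 2
    return e.sum(axis=1).argmax(), e.sum(axis=0).argmax()
```

Horizontally extended features such as eyes and mouth produce peaks in the row projection of the horizontal-edge band, which is how the projection step localises them.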

Facial Image Synthesis by Controlling Skin Microelements (피부 미세요소 조절을 통한 얼굴 영상 합성)

  • Kim, Yujin;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.3 / pp.369-377 / 2022
  • Recent deep learning-based face synthesis research can generate realistic faces, including overall style and elements such as hair, glasses, and makeup. However, previous methods cannot create a face at a very detailed level, such as the microstructure of the skin. In this paper, to overcome this limitation, we propose a technique for synthesizing a more realistic facial image from a single face label image by controlling the types and intensity of skin microelements. The proposed technique uses Pix2PixHD, an image-to-image translation method, to convert a label image marking the facial region and skin elements such as wrinkles, pores, and redness into a facial image with the microelements added. Experimental results show that, by generating label images with adjusted skin-element regions, the method can create various realistic face images reflecting the corresponding fine skin elements.

Comparison of recognition rate with distance on stereo face images based on PCA (PCA기반의 스테레오 얼굴영상에서 거리에 따른 인식률 비교)

  • Park Chang-Han;Namkung Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.1 / pp.9-16 / 2005
  • In this paper, we compare face recognition rates over varying distance using the Principal Component Analysis algorithm, with the left and right images of a stereo pair as input. In the proposed method, the image is converted from the RGB to the YCbCr color space and the face region is detected. After estimating the distance from the stereo images, the detected face image is enlarged or reduced to extract a robust face region, and the recognition rate is measured using the PCA algorithm. The average recognition rates of the acquired face images were 98.61% (30 cm), 98.91% (50 cm), 99.05% (100 cm), 99.90% (120 cm), 97.31% (150 cm), and 96.71% (200 cm). The experiments therefore show that high recognition rates can be obtained when the face is scaled up or down according to distance.
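
Two pieces of this pipeline lend themselves to short sketches: the stereo distance estimate and the distance-dependent rescaling. The focal length and baseline below are assumptions, and nearest-neighbour resampling stands in for whatever interpolation the authors used.

```python
import numpy as np

def stereo_distance_cm(x_left, x_right, focal_px=700.0, baseline_cm=6.0):
    """Depth from a rectified stereo pair via the textbook relation
    Z = f * B / disparity (camera constants are assumed values)."""
    disparity = float(x_left - x_right)
    return focal_px * baseline_cm / disparity

def rescale_face(face, distance_cm, ref_distance_cm=100.0):
    """Scale a detected face crop so faces taken at different distances
    reach PCA at a comparable size; nearest-neighbour resampling."""
    s = distance_cm / ref_distance_cm
    h, w = face.shape
    ys = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
    xs = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
    return face[np.ix_(ys, xs)]
```

Normalising apparent face size this way is what lets a single PCA eigenspace serve subjects photographed anywhere from 30 cm to 200 cm away.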