• Title/Abstract/Keyword: Facial image


Facial Image Recognition Based on Wavelet Transform and Neural Networks (웨이브렛 변환과 신경망 기반 얼굴 인식)

  • 임춘환;이상훈;편석범
    • Journal of the Institute of Electronics Engineers of Korea TE / Vol. 37, No. 3 / pp. 104-113 / 2000
  • In this study, we propose facial image recognition based on the wavelet transform and a neural network. The algorithm proceeds as follows. First, two gray-level images are captured under constant illumination; after removing input-image noise with a Gaussian filter, a difference image between the background image and the face input image is obtained and then processed with erosion and dilation. Second, a mask is generated from the dilated image, and the background and facial regions are separated by projecting the mask onto the face input image. A rectangular characteristic area containing the eyes, nose, mouth, eyebrows, and cheeks is then detected by searching the edges of the segmented face image. Finally, characteristic vectors are extracted by applying the discrete wavelet transform (DWT) to this characteristic area and normalized; the normalized vectors become the input vectors of a neural network, and recognition is performed through neural network learning. Simulation results show a recognition rate of 100% for trained images and 92% for untrained images.
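
A minimal sketch of the preprocessing and feature-extraction pipeline described above is given below, using OpenCV, PyWavelets, and scikit-learn. The threshold, kernel sizes, wavelet, and network size are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the described pipeline: background differencing,
# morphology, DWT features, and a small neural-network classifier.
# Parameter values are illustrative only, not those used in the paper.
import cv2
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def face_mask(background, face, thresh=30):
    """Difference image -> binary mask cleaned with erosion and dilation."""
    bg = cv2.GaussianBlur(background, (5, 5), 0)
    fg = cv2.GaussianBlur(face, (5, 5), 0)
    diff = cv2.absdiff(fg, bg)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)
    return mask

def dwt_features(region, wavelet="haar", level=2):
    """Normalized approximation coefficients of a 2-D DWT as a feature vector."""
    coeffs = pywt.wavedec2(region.astype(np.float32), wavelet, level=level)
    vec = coeffs[0].ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

# Usage sketch: X holds DWT feature vectors of cropped characteristic areas,
# y holds identity labels; a small MLP performs the recognition step.
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X, y)
```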


Clinical predictive diagnostic study on prognosis of Bell's palsy with the Digital Infrared Thermal Image (적외선 체열진단법을 이용한 Bell's palsy의 임상적 예후 진단 연구)

  • Song, Beom-Yong
    • Journal of Acupuncture Research / Vol. 18, No. 1 / pp. 1-13 / 2001
  • Background and Purpose: Most diagnostic methods for facial palsy are invasive and complex, and the prognosis for recovery is difficult to determine in the first stage after onset. Digital Infrared Thermal Imaging (DITI), however, is a non-invasive and simple diagnostic method, so we studied the clinical prognostic diagnosis of Bell's palsy using DITI. Objectives and Methods: This study analyzed clinical statistics for 89 cases of Bell's palsy treated with oriental medical care at Woosuk University over the two years from November 1998 to October 2000. All subjects had Grade 6 (zero state) Bell's palsy in the first week after onset. Facial temperatures were measured after onset: group A from 1 to 4 days, group B from 5 to 8 days, and group C from 9 to 12 days after onset. Results and Conclusions: The digital infrared thermal images showed that the higher the temperature at TE23, B2, S3, and S6 on the affected side, the more rapid the cure and the shorter the treatment period, whereas at TE17 on the affected side, the lower the temperature, the more rapid the cure and the shorter the treatment period. In conclusion, the prognosis of Bell's palsy appears to be closely related to the thermal difference between the normal and affected sides measured after onset.
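
As a rough, purely illustrative companion to the abstract, the snippet below computes side-to-side temperature differences at the acupoints named above from invented example readings; the data structure and values are hypothetical and not taken from the study.

```python
# Hypothetical illustration: side-to-side temperature differences at the
# acupoints named in the abstract. Readings are invented example values.
POINTS = ["TE23", "B2", "S3", "S6", "TE17"]

normal_side = {"TE23": 34.1, "B2": 34.4, "S3": 33.9, "S6": 33.7, "TE17": 34.0}
affected_side = {"TE23": 34.6, "B2": 34.9, "S3": 34.3, "S6": 34.0, "TE17": 33.5}

for p in POINTS:
    delta = affected_side[p] - normal_side[p]
    print(f"{p}: affected - normal = {delta:+.1f} deg C")
```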


Reconstruction from Feature Points of Face through Fuzzy C-Means Clustering Algorithm with Gabor Wavelets (FCM 군집화 알고리즘에 의한 얼굴의 특징점에서 Gabor 웨이브렛을 이용한 복원)

  • 신영숙;이수용;이일병;정찬섭
    • Korean Journal of Cognitive Science / Vol. 11, No. 2 / pp. 53-58 / 2000
  • This paper reconstructs local regions of a facial expression image from feature points extracted with the fuzzy C-means (FCM) clustering algorithm and Gabor wavelets. Feature extraction proceeds in two steps. In the first step, the edges of the main facial components are extracted using the mean of the 2-D Gabor wavelet coefficient histogram of the image; in the second step, the final feature points are extracted from this edge information using the FCM clustering algorithm. The study shows that the principal components of facial expression images can be reconstructed from only a few feature points obtained by FCM clustering. The method can be applied to object recognition as well as facial expression recognition.
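
The two-step idea (Gabor-based edge responses, then FCM clustering of candidate points) might be sketched as follows; the Gabor parameters, threshold rule, and cluster count are assumptions for illustration, and the FCM loop is a plain NumPy implementation rather than the authors' code.

```python
# Rough sketch: Gabor-based edge responses followed by fuzzy C-means (FCM)
# clustering of the strongest response locations. Parameters are illustrative.
import cv2
import numpy as np

def gabor_response(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Maximum magnitude response over a small bank of Gabor filters."""
    acc = np.zeros_like(gray, dtype=np.float32)
    for theta in thetas:
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        acc = np.maximum(acc, np.abs(resp))
    return acc

def fcm(points, n_clusters, m=2.0, iters=100):
    """Plain NumPy fuzzy C-means; returns cluster centers (feature points)."""
    rng = np.random.default_rng(0)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = um.T @ points / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-8
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers

# Usage sketch: take the strongest Gabor responses as candidate edge points,
# then summarize them with a handful of FCM cluster centers.
# resp = gabor_response(gray)
# ys, xs = np.where(resp > resp.mean() + 2 * resp.std())
# feature_points = fcm(np.stack([xs, ys], axis=1).astype(np.float32), n_clusters=8)
```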


Content-based Face Retrieval System using Wavelet and Neural Network (Wavelet과 신경망을 이용한 내용기반 얼굴 검색 시스템)

  • 강영미;정성환
    • Journal of the Korea Computer Industry Society / Vol. 2, No. 3 / pp. 265-274 / 2001
  • In this paper, we propose a content-based face retrieval system that retrieves a face on the basis of a facial feature region. Instead of using a keyword such as a resident registration number or a name as the query, the system uses a facial image as a visual query. That is, a face is recognized from a specific feature region that includes the eyes, nose, and mouth. To this end, the feature region is extracted using color information based on the HSI color model together with edge information from the wavelet-transformed image, and is then recognized with a neural network. The proposed system is implemented in a client/server environment on an Oracle DBMS to support a large facial image database. In an experiment with 150 facial images, the proposed method achieved a recognition rate of about 88.3%.
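
A minimal sketch of the two cues used for feature-region extraction is shown below; HSV is used here as a stand-in for the HSI model mentioned in the abstract, and the threshold ranges are assumptions.

```python
# Minimal sketch of the two cues in the abstract: a hue/saturation-based skin
# mask (HSV as a stand-in for HSI) and wavelet detail coefficients as edge
# information. All thresholds are illustrative assumptions.
import cv2
import numpy as np
import pywt

def skin_mask_hsv(bgr, h_range=(0, 25), s_range=(40, 180)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([h_range[0], s_range[0], 30], dtype=np.uint8)
    upper = np.array([h_range[1], s_range[1], 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

def wavelet_edges(gray, wavelet="haar"):
    """Combine horizontal/vertical/diagonal detail bands into one edge map."""
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), wavelet)
    return np.abs(cH) + np.abs(cV) + np.abs(cD)

# The masked region and its edge map would then be fed to a neural network
# for the recognition step described in the abstract.
```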


A study on the color quantization for facial images using skin-color mask (살색 검출 mask를 이용한 사진영상의 컬러 양자화에 대한 연구)

  • Lee, Min-Cheol;Lee, Jong-Deok;Huh, Myung-Sun;Moon, Chan-Woo;Ahn, Hyun-Sik;Jeong, Gu-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 8, No. 1 / pp. 25-30 / 2008
  • In this paper, we propose a color quantization method for facial images in mobile services, in which skin colors are emphasized. First, a skin-color mask is extracted from the image and the image is divided into two regions. Next, a color palette is extracted for each region. With the proposed method, the loss in the face region is minimized, which makes it useful for mobile services that deal with facial images. An 8-bit color quantization experiment shows that the proposed method works well.
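
The overall scheme (skin-color mask, then a separate palette per region within an 8-bit budget) could look roughly like the sketch below; the YCrCb skin thresholds and the 160/96 palette split are assumptions, not values from the paper.

```python
# Rough sketch: split the image with a skin-color mask and give the skin
# region a larger share of the 256-color (8-bit) palette. The YCrCb skin
# thresholds and the 160/96 split are assumptions.
import cv2
import numpy as np

def quantize_region(pixels, n_colors):
    """K-means palette for one region; returns (palette, labels)."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, palette = cv2.kmeans(pixels.astype(np.float32), n_colors, None,
                                    criteria, 3, cv2.KMEANS_PP_CENTERS)
    return palette.astype(np.uint8), labels.ravel()

def quantize_with_skin_mask(bgr, skin_colors=160, other_colors=96):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper) > 0
    out = np.empty_like(bgr)
    for region_mask, n in ((mask, skin_colors), (~mask, other_colors)):
        pixels = bgr[region_mask].reshape(-1, 3)
        if len(pixels) == 0:
            continue
        palette, labels = quantize_region(pixels, n)
        out[region_mask] = palette[labels]
    return out
```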


Analysis of Facial Asymmetry

  • Choi, Kang Young
    • Archives of Craniofacial Surgery / Vol. 16, No. 1 / pp. 1-10 / 2015
  • Facial symmetry is an important component of attractiveness; however, functional symmetry is preferable to purely aesthetic symmetry. In addition, fluctuating asymmetry is more natural and common, even if patients find such asymmetry noticeable. Fluctuating asymmetry nevertheless remains difficult to define, and several studies have shown that a certain level of asymmetry can create an unfavorable impression. A natural profile is preferred over a perfect mirror-image profile, and images with canting and length differences of less than 3°-4° and 3-4 mm, respectively, are generally not recognized as asymmetric. In this study, a questionnaire survey of 434 medical students was used to evaluate photographs of Asian women. The students preferred the original images over the mirror images, and facial asymmetry was noticed when the canting and difference exceeded 3° and 3 mm, respectively. When a certain level of asymmetry is recognizable, correcting it can help improve social life and human relationships. Before any operation, the anatomical components responsible for noticeable asymmetry, which can be divided into hard and soft tissues, should be understood. For diagnosis, two- and three-dimensional (3D) photogrammetry and radiometry are used, including photography, laser scanning, cephalometry, and 3D computed tomography.
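
As a simple numerical illustration of the 3° / 3 mm noticeability thresholds mentioned above, the sketch below measures canting as the tilt of a line joining one hypothetical bilateral landmark pair; the landmark names and coordinates are invented.

```python
# Hypothetical illustration of the 3 degree / 3 mm noticeability thresholds:
# canting is the tilt of the line joining a bilateral landmark pair, and the
# length difference is the difference of left/right distances to the midline.
# Landmark names and coordinates are invented for illustration.
import math

def canting_deg(left_pt, right_pt):
    dx = right_pt[0] - left_pt[0]
    dy = right_pt[1] - left_pt[1]
    return abs(math.degrees(math.atan2(dy, dx)))

left_oral_commissure, right_oral_commissure = (-24.0, -1.8), (25.0, 0.9)  # mm
cant = canting_deg(left_oral_commissure, right_oral_commissure)
length_diff = abs(abs(left_oral_commissure[0]) - abs(right_oral_commissure[0]))

print(f"canting {cant:.1f} deg, length difference {length_diff:.1f} mm")
print("noticeable asymmetry" if cant > 3 or length_diff > 3 else "within normal range")
```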

Facial Expression Recognition through Self-supervised Learning for Predicting Face Image Sequence

  • Yoon, Yeo-Chan;Kim, Soo Kyun
    • Journal of the Korea Society of Computer and Information / Vol. 27, No. 9 / pp. 41-47 / 2022
  • In this paper, we propose a new and simple self-supervised learning method that predicts the middle image of a face image sequence for automatic expression recognition. Automatic facial expression recognition can achieve high performance with deep learning methods but generally requires an expensive, large dataset, since algorithm performance tends to be proportional to dataset size. The proposed method learns a latent deep representation of the face through self-supervised learning on an existing dataset, without constructing an additional one, and then transfers the learned parameters to a new facial expression recognition model to improve recognition performance. The proposed method showed large performance improvements on two datasets, CK+ and AFEW 8.0, demonstrating its effectiveness.
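
A minimal PyTorch sketch of the pretext task (predict the middle frame from its neighbours, then transfer the encoder to expression recognition) is shown below; the architecture sizes and the four-frame context are arbitrary choices, not the authors' configuration.

```python
# Minimal PyTorch sketch of the pretext task: predict the middle frame of a
# short face-image sequence from its neighbours, then reuse the encoder for
# expression recognition. Sizes are arbitrary; dataset details are omitted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch, feat=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat))

    def forward(self, x):
        return self.net(x)

class MiddleFramePredictor(nn.Module):
    """Pretext model: neighbouring frames in, reconstructed middle frame out."""
    def __init__(self, n_context=4, img_size=64, feat=128):
        super().__init__()
        self.encoder = Encoder(in_ch=3 * n_context, feat=feat)
        self.decoder = nn.Sequential(
            nn.Linear(feat, 3 * img_size * img_size), nn.Sigmoid())
        self.img_size = img_size

    def forward(self, context_frames):          # (B, 3*n_context, H, W)
        z = self.encoder(context_frames)
        out = self.decoder(z)
        return out.view(-1, 3, self.img_size, self.img_size)

# Transfer sketch: keep the pretrained encoder and add a classification head.
# pretext = MiddleFramePredictor()
# ... train pretext with an L1/L2 loss against the true middle frame ...
# classifier = nn.Sequential(pretext.encoder, nn.Linear(128, n_expressions))
```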

A Study on the Individual Authentication Using Facial Information For Online Lecture (가상강의에 적용을 위한 얼굴영상정보를 이용한 개인 인증 방법에 관한 연구)

  • 김동현;권중장
    • Proceedings of the IEEK Conference / Proceedings of the 2000 IEEK Fall Conference (3) / pp. 117-120 / 2000
  • In this paper, we propose an authentication system for online lectures that uses facial information and a face recognition algorithm based on relations between facial features. First, the facial area is detected against a complex background using color information. Second, features are extracted from edge profiles. Third, the features are compared with those of the original facial image stored in the database. Experiments show that the proposed system is a useful authentication method for online lectures.
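
A toy version of the matching step could look like the sketch below; the edge-profile feature (row/column gradient sums) and the distance threshold are assumptions made for illustration.

```python
# Simple illustration of the matching step: an edge-profile feature vector
# from the detected face region is compared against the enrolled vector in
# the database. Feature choice and threshold are assumptions.
import numpy as np

def edge_profile(gray):
    """Row/column sums of gradient magnitude as a crude edge-profile feature."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    vec = np.concatenate([mag.sum(axis=0), mag.sum(axis=1)])
    return vec / (np.linalg.norm(vec) + 1e-8)

def authenticate(live_face, enrolled_vec, threshold=0.15):
    return np.linalg.norm(edge_profile(live_face) - enrolled_vec) < threshold
```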


A 3D Face Reconstruction Method Robust to Errors of Automatic Facial Feature Point Extraction (얼굴 특징점 자동 추출 오류에 강인한 3차원 얼굴 복원 방법)

  • Lee, Youn-Joo;Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / Vol. 48, No. 1 / pp. 122-131 / 2011
  • The 3D morphable shape model, a widely used single-image-based 3D face reconstruction method, reconstructs an accurate 3D facial shape when the 2D facial feature points are correctly extracted from the input face image. However, when a user's cooperation is not available, as in a real-time 3D face reconstruction system, the method can be vulnerable to errors in automatic facial feature point extraction. To solve this problem, we automatically classify the extracted facial feature points into two groups, erroneous and correct, and then reconstruct the 3D facial shape using only the correctly extracted feature points. Experimental results show that the 3D reconstruction performance of the proposed method is remarkably improved compared to the previous method, which does not account for errors in automatic facial feature point extraction.
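
The general idea of discarding unreliable landmarks before fitting can be illustrated roughly as follows; this residual-based rejection against a similarity-aligned mean shape is a generic stand-in, not the classification method used in the paper.

```python
# Hedged sketch of the general idea (not the paper's specific classifier):
# landmarks whose residual against a similarity-aligned mean shape is large
# are treated as erroneous and excluded before the shape fit.
import numpy as np

def align_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation), src -> dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    u, sig, vt = np.linalg.svd(d.T @ s)
    r = u @ vt
    scale = sig.sum() / (s ** 2).sum()
    return lambda p: scale * (p - mu_s) @ r.T + mu_d

def reliable_landmarks(detected, mean_shape, thresh=3.0):
    """Boolean mask of landmarks kept for the subsequent 3D model fit."""
    warp = align_similarity(mean_shape, detected)
    residual = np.linalg.norm(warp(mean_shape) - detected, axis=1)
    cutoff = thresh * np.median(residual) + 1e-8
    return residual < cutoff
```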

Three-dimensional morphometric analysis of facial units in virtual smiling facial images with different smile expressions

  • Hang-Nga Mai;Thaw Thaw Win;Minh Son Tong;Cheong-Hee Lee;Kyu-Bok Lee;So-Yeun Kim;Hyun-Woo Lee;Du-Hyeong Lee
    • The Journal of Advanced Prosthodontics / Vol. 15, No. 1 / pp. 1-10 / 2023
  • PURPOSE. Accuracy of image matching between resting and smiling facial models is affected by the stability of the reference surfaces. This study aimed to investigate the morphometric variations in subdivided facial units during resting, posed, and spontaneous smiling. MATERIALS AND METHODS. The posed and spontaneous smiling faces of 33 adults were digitized and registered to the resting faces. The morphological changes of subdivided facial units in the forehead (upper and lower central, upper and lower lateral, and temple), nasal (dorsum, tip, lateral wall, and alar lobules), and chin (central and lateral) regions were assessed by measuring the 3D mesh deviations between the smiling and resting facial models. One-way analysis of variance, Duncan post hoc tests, and Student's t-test were used to determine the differences among the groups (α = .05). RESULTS. The smallest morphometric changes were observed at the upper and central forehead and nasal dorsum, whereas the largest deviation was found at the nasal alar lobules in both the posed and spontaneous smiles (P < .001). The spontaneous smile generally resulted in larger facial unit changes than the posed smile, and significant differences were observed at the alar lobules, central chin, and lateral chin units (P < .001). CONCLUSION. The upper and central forehead and nasal dorsum are reliable areas for image matching between resting and smiling 3D facial images. The central chin area can be considered an additional reference area for posed smiles; however, special caution should be taken when selecting this area as a reference for spontaneous smiles.
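
The deviation measurement could be sketched as below, assuming the smiling mesh has already been registered to the resting mesh and each resting vertex carries a facial-unit label; mesh loading, registration, and the labelling itself are outside this snippet.

```python
# Illustrative sketch of the deviation measurement: after the smiling mesh is
# registered to the resting mesh, the deviation of each facial unit is
# summarized as the mean nearest-point distance of the vertices labelled with
# that unit. Region labels and mesh loading are assumed to exist already.
import numpy as np
from scipy.spatial import cKDTree

def unit_deviations(rest_vertices, smile_vertices, unit_labels):
    """Mean nearest-point distance per facial unit (labels index rest_vertices)."""
    tree = cKDTree(smile_vertices)
    dist, _ = tree.query(rest_vertices)  # distance of each resting vertex to the smiling mesh
    return {unit: float(dist[unit_labels == unit].mean())
            for unit in np.unique(unit_labels)}
```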