• Title/Summary/Keyword: Facial Images


Soft tissue evaluation using 3-dimensional face image after maxillary protraction therapy (3차원 얼굴 영상을 이용한 상악 전방견인 치료 후의 연조직 평가)

  • Choi, Dong-Soon;Lee, Kyoung-Hoon;Jang, Insan;Cha, Bong-Kuen
    • The Journal of the Korean dental association / v.54 no.3 / pp.217-229 / 2016
  • Purpose: The aim of this study was to evaluate soft-tissue changes after maxillary protraction therapy using three-dimensional (3D) facial images. Materials and Methods: This study used pretreatment (T1) and posttreatment (T2) 3D facial images from thirteen Class III malocclusion patients (6 boys and 7 girls; mean age, 8.9 ± 2.2 years) who received maxillary protraction therapy. The facial images were taken using an optical scanner (Rexcan III 3D scanner), and the T1 and T2 images were superimposed using the forehead area as a reference. The soft-tissue changes after treatment (T2-T1) were calculated three-dimensionally using 15 soft-tissue landmarks and 3 reference planes. Results: Anterior movements of the soft tissue were observed at the pronasale, subnasale, nasal ala, soft-tissue zygoma, and upper lip. Posterior movements were observed at the lower lip, soft-tissue B-point, and soft-tissue gnathion. Vertically, most soft-tissue landmarks moved downward at T2. In the transverse direction, bilateral landmarks, i.e., the exocanthion, zygomatic point, nasal ala, and cheilion, moved more laterally at T2. Conclusion: The facial soft tissue of Class III malocclusion patients changed three-dimensionally after maxillary protraction therapy. In particular, the facial profile was improved by forward movement of the midface and downward and backward movement of the lower face.
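The per-landmark comparison described in this abstract can be sketched in a few lines. The coordinates below are hypothetical stand-ins for superimposed T1/T2 scan data, not values from the study:

```python
import math

def displacement(t1, t2):
    """Per-landmark change (dx, dy, dz) between superimposed scans, plus its magnitude."""
    dx, dy, dz = (t2[0] - t1[0], t2[1] - t1[1], t2[2] - t1[2])
    return (dx, dy, dz), math.sqrt(dx * dx + dy * dy + dz * dz)

# Hypothetical coordinates (mm) for the subnasale landmark at T1 and T2
t1_subnasale = (0.0, -1.2, 3.4)
t2_subnasale = (1.5, -2.0, 3.9)
delta, magnitude = displacement(t1_subnasale, t2_subnasale)
```

In the study each displacement would additionally be decomposed along the three reference planes to separate anteroposterior, vertical, and transverse movement.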


Contour Extraction of Facial Features Based on the Enhanced Snake (개선된 스네이크를 이용한 얼굴 특징요소의 윤곽 추출)

  • Lee, Sung Soo;Jang, JongWhan
    • KIPS Transactions on Software and Data Engineering / v.4 no.8 / pp.309-314 / 2015
  • The snake is one of the typical methods for extracting facial features from face images. Although the snake is simple and fast, its performance is strongly affected by the initial contour and the shape of the object to be extracted. In this paper, an enhanced snake is proposed to extract better facial features from 6 lip and mouth images by adding a snake point at the midpoint of each snake segment. It is shown that the RSD of the proposed method is about 2.8% to 5.8% less than that of the Greedy snake on 6 test face images. Since a smaller RSD is obtained especially for contours with high concavity, such contours are extracted more accurately.
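The midpoint-insertion idea can be illustrated schematically. This sketch only shows the refinement step of adding a point at each segment's midpoint; it is not the authors' full snake energy minimization:

```python
def insert_midpoints(points):
    """Double a closed contour's resolution by inserting each segment's midpoint.

    points: list of (x, y) vertices of a closed snake contour.
    """
    out = []
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]  # wrap around: the snake is closed
        out.append((x0, y0))
        out.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
    return out

# A coarse 4-point contour becomes an 8-point one
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
refined = insert_midpoints(square)
```

The extra points give the snake more freedom to bend into concave regions, which matches the paper's observation that the RSD improvement is largest for highly concave contours.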

A study on the color quantization for facial images using skin-color mask (살색 검출 mask를 이용한 사진영상의 컬러 양자화에 대한 연구)

  • Lee, Min-Cheol;Lee, Jong-Deok;Huh, Myung-Sun;Moon, Chan-Woo;Ahn, Hyun-Sik;Jeong, Gu-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.8 no.1 / pp.25-30 / 2008
  • In this paper, we propose a color quantization method for facial images in mobile services. For facial images, skin colors should be emphasized. First, we extract a skin-color mask from the image and divide the image into two regions. Next, we extract a color palette for each of the two regions. In the proposed method, the loss in the face region is minimized, which makes it useful for mobile services that handle facial images. Through an 8-bit color quantization experiment, we show that the proposed method works well.
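The two-step idea (mask, then region-specific quantization) can be sketched as follows. The skin rule and the uniform per-channel quantizer here are common heuristics chosen for illustration, not the paper's actual mask or palette-extraction method:

```python
def is_skin(r, g, b):
    # A widely used heuristic RGB skin rule; an assumption, not the paper's detector.
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

def quantize(value, levels):
    """Uniformly quantize an 8-bit channel value to the given number of levels."""
    step = 256 // levels
    return (value // step) * step + step // 2

def quantize_pixel(pixel, skin_levels=8, other_levels=4):
    """Spend more palette levels on skin pixels so the face region loses less detail."""
    r, g, b = pixel
    levels = skin_levels if is_skin(r, g, b) else other_levels
    return tuple(quantize(c, levels) for c in (r, g, b))

skin_result = quantize_pixel((200, 120, 90))   # skin-like: finer 8-level palette
bg_result = quantize_pixel((0, 0, 0))          # background: coarser 4-level palette
```

Assigning the two regions separate palettes is what keeps quantization loss in the face region low at a fixed total palette size.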


Real-Time Automatic Tracking of Facial Feature (얼굴 특징 실시간 자동 추적)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.6 / pp.1182-1187 / 2004
  • Robust, real-time, fully automatic tracking of facial features is required for many computer vision and graphics applications. In this paper, we describe a fully automatic system that tracks eyes and eyebrows in real time. The pupils are tracked using the red-eye effect with an infrared-sensitive camera equipped with infrared LEDs. Templates are used to parameterize the facial features. For each new frame, the pupil coordinates are used to extract cropped images of the eyes and eyebrows. The template parameters are recovered by PCA on these extracted images, using a PCA basis constructed during the training phase from example images. The system runs at 30 fps and requires no manual initialization or calibration. The system is shown to work well on sequences with considerable head motion and occlusion.
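The template-recovery step can be sketched with NumPy. The training crops below are random placeholders standing in for real eye/eyebrow images; the projection onto a PCA basis is the part the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 64))   # hypothetical: 50 training crops, 64 pixels each
mean = train.mean(axis=0)

# PCA basis from the SVD of the mean-centred training set
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:5]                      # keep the 5 leading principal components

def recover_params(crop):
    """Project a new cropped image onto the basis; the coefficients are the template parameters."""
    return basis @ (crop - mean)

params = recover_params(train[0])
recon = mean + basis.T @ params     # approximate reconstruction from the parameters
```

At run time, only the projection is computed per frame, which is why the approach is cheap enough for 30 fps tracking.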

Analysis of Facial Asymmetry

  • Choi, Kang Young
    • Archives of Craniofacial Surgery / v.16 no.1 / pp.1-10 / 2015
  • Facial symmetry is an important component of attractiveness. However, functional symmetry is preferred over purely aesthetic symmetry. In addition, fluctuating asymmetry is more natural and common, even if patients find such asymmetry noticeable. Fluctuating asymmetry nevertheless remains difficult to define. Several studies have shown that a certain level of asymmetry can create an unfavorable impression. A natural profile is preferred over a perfect mirror-image profile, and images with canting of less than 3°-4° and differences of less than 3-4 mm are generally not recognized as asymmetric. In this study, a questionnaire survey among 434 medical students was used to evaluate photos of Asian women. The students preferred original images over mirror images. Facial asymmetry was noticed when the canting and difference exceeded 3° and 3 mm, respectively. When a certain level of asymmetry is recognizable, correcting it can help to improve social life and human relationships. Prior to any operation, the anatomical components underlying noticeable asymmetry, which can be divided into hard tissue and soft tissue, should be understood. For diagnosis, two- and three-dimensional (3D) photogrammetry and radiometry are used, including photography, laser scanning, cephalometry, and 3D computed tomography.
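The perceptual thresholds reported in the abstract (about 3° of canting or 3 mm of side-to-side difference) reduce to a simple rule, sketched here; treating the two thresholds as an either/or cutoff is an assumption for illustration:

```python
def is_noticeable(canting_deg, difference_mm, deg_threshold=3.0, mm_threshold=3.0):
    """Flag asymmetry as perceptible when either reported threshold is exceeded."""
    return canting_deg > deg_threshold or difference_mm > mm_threshold

mild = is_noticeable(2.0, 2.5)      # below both thresholds: generally unnoticed
canted = is_noticeable(3.5, 0.0)    # canting alone exceeds 3 degrees
```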

Conflict Resolution: Analysis of the Existing Theories and Resolution Strategies in Relation to Face Recognition

  • A. A. Alabi;B. S. Afolabi;B. I. Akhigbe;A. A. Ayoade
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.166-176 / 2023
  • A scenario known as conflict may arise in face recognition as a result of disparity-related issues (such as expression, distortion, occlusion, and others), leading to a compromise of someone's identity or a contradiction of the intended message. Addressing this requires determining and applying appropriate procedures from among the various conflict theories, in terms of both concepts and resolution strategies. Theories such as Marxist theory, game theory (the prisoner's dilemma, penny matching, the chicken problem), Lanchester theory, and information theory were analyzed in relation to conflict in facial images, by answering selected questions about resolving facial conflict. It was observed that the scenarios presented in Marxist theory agree with the form of resolution expected when analyzing conflict and its related issues as they relate to face recognition. The study found that conflict in facial images can be better analyzed using the concept introduced by Marxist theory in relation to information theory. This is because its resolution strategy seeks a balanced outcome, as opposed to the win-or-lose scenarios applied in other concepts. This conclusion was consolidated by reference to the main mechanisms and outcome scenarios applicable in information theory.

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar;Surjeet Kumar;Arnav Bhavsar;Kotiba Hamad;Yang-Sae Moon;Dae Ho Yoon
    • Journal of Information Processing Systems / v.20 no.4 / pp.558-573 / 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as an output. Typically, quality assessment techniques use deep learning methods to categorize images. Deep learning models, however, operate as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust. Explainability techniques provide visual evidence of the active regions within an image on which the deep learning model bases a prediction. Here, we developed a technique for the reliable prediction of facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping and local interpretable model-agnostic explanations was used to explain the model. This approach has been implemented in the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the use of combined explanations provides better visual explanations for the model, where both the saliency-map and perturbation-based explainability techniques verify predictions.
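The combination step of gradient-weighted class activation mapping (Grad-CAM) can be sketched in NumPy. The activation and gradient tensors below are random placeholders standing in for a real CNN's final convolutional layer; the weighting, channel sum, ReLU, and normalization are the standard Grad-CAM recipe:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations, gradients: (channels, H, W) arrays taken at the target conv layer
    for the class score of interest.
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for visualization
    return cam

rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))        # hypothetical layer activations
grads = rng.normal(size=(8, 7, 7))  # hypothetical gradients of the class score
heatmap = grad_cam(acts, grads)
```

The resulting heatmap is what gets upsampled and overlaid on the input face image as the saliency-map half of the paper's combined explanation.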

Facial Expression Explorer for Realistic Character Animation

  • Ko, Hee-Dong;Park, Moon-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.16.1-164 / 1998
  • This paper describes the Facial Expression Explorer, which searches for the components of a facial expression and maps the expression onto other expressionless figures such as a robot, frog, teapot, or rabbit. In general, creating a facial expression manually is a time-consuming and laborious job, especially when the facial expression must personify a well-known public figure or an actor. To extract a blending ratio from facial images automatically, the Facial Expression Explorer uses the Networked Genetic Algorithm (NGA), a genetic-algorithm variant with fast convergence. The blending ratio is commonly used by animators to create facial expressions through shape-blending methods. With the Facial Expression Explorer, a realistic facial expression can be modeled more efficiently.
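The shape-blending step that consumes the recovered blending ratio can be sketched as a weighted sum of expression offsets over a neutral shape. All the data here is hypothetical; only the blending formula itself is the standard technique the abstract refers to:

```python
def blend(neutral, deltas, weights):
    """Blend expression offsets onto a neutral shape: neutral + sum(w_i * delta_i)."""
    out = list(neutral)
    for w, delta in zip(weights, deltas):
        out = [v + w * d for v, d in zip(out, delta)]
    return out

# Hypothetical 1-D vertex coordinates for a neutral face and two expression offsets
neutral = [0.0, 0.0]
deltas = [[1.0, 0.0],   # e.g. a "smile" offset
          [0.0, 2.0]]   # e.g. a "brow-raise" offset
blended = blend(neutral, deltas, [0.5, 0.25])
```

The explorer's NGA search amounts to finding the weight vector whose blended result best matches the expression observed in the input facial image.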

Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae;Park Ki-Soo
    • Journal of KIISE: Software and Applications / v.32 no.6 / pp.570-579 / 2005
  • Facial expression recognition technology, which has potential applications in various fields, is being applied to man-machine interface development, human identification, and the restoration of facial expressions on virtual models. Using sequential facial images, this study proposes a simpler method for detecting human facial expressions such as happiness, anger, surprise, and sadness. Moreover, the proposed method can detect facial expressions in sequential facial images that do not exhibit rigid motion. We identify the face and the elements of facial expressions, and then estimate the feature regions of the elements using information about color, size, and position. In the next step, the direction patterns of the feature regions of each element are determined using optical flows estimated by gradient methods. Using the direction model proposed in this study, we match each direction pattern. The method identifies a facial expression based on the minimum score among the combination values between the direction model and pattern matching for each facial expression. The experiments verify the validity of the proposed methods.
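Turning a feature region's optical-flow vectors into a direction pattern can be sketched by quantizing each vector's angle into bins. The eight-bin layout and the histogram representation are assumptions for illustration, not the paper's exact direction model:

```python
import math

def direction_bin(dx, dy, bins=8):
    """Quantize a flow vector's direction into one of `bins` angular sectors."""
    angle = math.atan2(dy, dx) % (2 * math.pi)   # map angle into [0, 2*pi)
    return int(angle / (2 * math.pi / bins)) % bins

def region_pattern(flows, bins=8):
    """Histogram of direction bins for the flow vectors inside one feature region."""
    hist = [0] * bins
    for dx, dy in flows:
        hist[direction_bin(dx, dy, bins)] += 1
    return hist

# Hypothetical flow vectors: rightward, upward, leftward motion inside a region
pattern = region_pattern([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)])
```

Matching such per-region patterns against per-expression direction models, then taking the expression with the minimum combined score, mirrors the decision rule described in the abstract.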

Gender Classification of Low-Resolution Facial Image Based on Pixel Classifier Boosting

  • Ban, Kyu-Dae;Kim, Jaehong;Yoon, Hosub
    • ETRI Journal / v.38 no.2 / pp.347-355 / 2016
  • In face examinations, gender classification (GC) is one of several fundamental tasks. Recent literature on GC primarily utilizes datasets containing high-resolution images of faces captured in uncontrolled real-world settings. In contrast, few efforts have focused on utilizing low-resolution facial images for GC. We propose a GC method based on pixel classifier boosting with modified census transform features. Experiments are conducted using large datasets, such as Labeled Faces in the Wild and The Images of Groups, and the standard protocols of GC communities. Experimental results show that, despite using low-resolution facial images with a 15-pixel inter-ocular distance, the proposed method achieves a higher classification rate than current state-of-the-art GC algorithms.
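The modified census transform (MCT) feature used above can be sketched on a single 3x3 patch: each pixel is compared against the patch mean (rather than only the center pixel, as in the plain census transform), and the nine comparison bits form the feature code. This is a minimal single-patch sketch, not the paper's full boosted classifier:

```python
def mct(patch):
    """Modified census transform of one 3x3 patch -> 9-bit code.

    patch: 3x3 nested list of pixel intensities. Each pixel contributes one bit,
    set when the pixel is brighter than the patch mean (row-major, MSB first).
    """
    pixels = [p for row in patch for p in row]
    mean = sum(pixels) / 9.0
    code = 0
    for p in pixels:
        code = (code << 1) | (1 if p > mean else 0)
    return code

# A bright center on a dark background sets only the middle bit
center_code = mct([[0, 0, 0],
                   [0, 9, 0],
                   [0, 0, 0]])
```

Because the code depends only on comparisons against the local mean, it is robust to monotonic illumination changes, which is one reason MCT features hold up on low-resolution faces.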