• Title/Summary/Keyword: Facial components

Search results: 133

ID Face Detection Robust to Color Degradation and Partial Veiling (색열화 및 부분 은폐에 강인한 ID얼굴 검지)

  • Kim Dae Sung;Kim Nam Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.1 / pp.1-12 / 2004
  • In this paper, we present an identifiable face (ID face) detection method robust to color degradation and partial veiling. The method consists of three parts: segmentation of face candidate regions, extraction of face candidate windows, and decision of veiling. In the segmentation stage, face candidate regions are detected by finding skin color regions and facial components such as the eyes, nose, and mouth, which may have degraded colors, in an input image. In the extraction stage, face candidate windows that are likely to contain faces are extracted from the face candidate regions. In the decision stage, an eigenface method selects the face candidate window whose similarity to the eigenfaces is maximal, and whether the facial components of that window are veiled is determined in a similar way. Experimental results show that the proposed method improves the detection rate by about 11.4% on a test DB containing color-degraded and veiled faces, compared with a conventional method that does not consider color degradation and partial veiling.
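The veiling-decision step above relies on eigenface similarity. A minimal sketch of how such a similarity score can be computed, assuming flattened grayscale windows and PCA via SVD (the training data and window size here are toy placeholders, not the paper's):

```python
import numpy as np

def eigenface_similarity(train_faces, candidate, k=3):
    """Project a candidate window onto the top-k eigenfaces and return
    a similarity score based on reconstruction error.
    train_faces: (n, d) flattened training faces; candidate: (d,)."""
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # Eigenfaces are the top right-singular vectors of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                       # (k, d)
    coeffs = eigenfaces @ (candidate - mean)  # projection coefficients
    recon = mean + eigenfaces.T @ coeffs      # reconstruction from k coefficients
    err = np.linalg.norm(candidate - recon)
    return 1.0 / (1.0 + err)                  # higher = more face-like

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))             # toy "face" vectors
score_face = eigenface_similarity(faces, faces[0])
score_rand = eigenface_similarity(faces, rng.normal(size=64) * 10)
```

A window whose score is below some calibrated level for a given facial component would then be flagged as veiled.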

Rotated Face Detection Using Polar Coordinate Transform and AdaBoost (극좌표계 변환과 AdaBoost를 이용한 회전 얼굴 검출)

  • Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.896-902 / 2021
  • Rotated face detection is required in many applications but still remains a challenging task due to the large variations in face appearance. In this paper, a polar coordinate transform that is not affected by rotation is proposed, together with a method for effectively detecting rotated faces using the transformed image. The proposed polar coordinate transform eliminates rotation effects while maintaining the spatial relations between facial components such as the eyes and mouth, since the positions of the facial components are preserved regardless of rotation angle. Polar-coordinate-transformed images are trained using AdaBoost, which is commonly used for frontal face detection, and rotated faces are detected. The detected faces are validated using an LBP classifier trained on non-face images. Experiments on 3,600 face images obtained by rotating images from the BioID database show a rotated face detection rate of 96.17%. Furthermore, rotated faces were accurately detected in images whose backgrounds contain multiple rotated faces.
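The key property of the polar coordinate transform, that an in-plane rotation of the input becomes a circular shift along the angle axis, can be illustrated with a small sketch (the grid resolution and nearest-neighbor sampling below are illustrative choices, not the paper's):

```python
import numpy as np

def polar_transform(img, n_r=16, n_theta=32):
    """Sample a square image on a polar grid centered at the image center.
    An in-plane rotation of the input becomes a circular shift along the
    theta axis of the output, so a detector trained on one orientation
    can cover all of them."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_r, n_theta))
    for i in range(n_r):
        for j in range(n_theta):
            r = r_max * (i + 0.5) / n_r
            t = 2 * np.pi * j / n_theta
            y = int(round(cy + r * np.sin(t)))  # nearest-neighbor sampling
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[y, x]
    return out

img = np.zeros((33, 33))
img[16, 20:28] = 1.0        # a horizontal bar to the right of center
rot = np.rot90(img)         # rotate the image by 90 degrees
p1 = polar_transform(img)
p2 = polar_transform(rot)
# The bar's energy moves to a different theta column after rotation.
bin1 = int(np.argmax(p1.sum(axis=0)))
bin2 = int(np.argmax(p2.sum(axis=0)))
```

Here the 90° rotation shifts the dominant theta bin by a quarter of the `n_theta` columns while the radial profile is unchanged.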

Differences in the heritability of craniofacial skeletal and dental characteristics between twin pairs with skeletal Class I and II malocclusions

  • Park, Heon-Mook;Kim, Pil-Jong;Sung, Joohon;Song, Yun-Mi;Kim, Hong-Gee;Kim, Young Ho;Baek, Seung-Hak
    • The Korean Journal of Orthodontics / v.51 no.6 / pp.407-418 / 2021
  • Objective: To investigate differences in the heritability of skeletodental characteristics between twin pairs with skeletal Class I and Class II malocclusions. Methods: Forty Korean adult twin pairs were divided into a Class I (C-I) group (0° ≤ angle between point A, nasion, and point B [ANB] ≤ 4°; mean age, 40.7 years) and a Class II (C-II) group (ANB > 4°; mean age, 43.0 years). Each group comprised 14 monozygotic and 6 dizygotic twin pairs. Thirty-three cephalometric variables were measured on lateral cephalograms and categorized as anteroposterior, vertical, dental, mandible, and cranial base characteristics. The ACE model was used to calculate heritability (A > 0.7, high heritability). Thereafter, principal component analysis (PCA) was performed. Results: Twin pairs in the C-I group exhibited high heritability values in the facial anteroposterior characteristics, inclination of the maxillary and mandibular incisors, mandibular body length, and cranial base angles. Twin pairs in the C-II group showed high heritability values in vertical facial height, ramus height, effective mandibular length, and cranial base length. PCA extracted eight components with 88.3% cumulative explanation in the C-I group and seven components with 91.0% cumulative explanation in the C-II group. Conclusions: Differences in the heritability of skeletodental characteristics between twin pairs with skeletal Class I and II malocclusions might provide valuable information for growth prediction and treatment planning.
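The PCA step, counting how many components are needed to reach a cumulative explained-variance level, can be sketched as follows (the toy data with 40 subjects and 33 variables merely mirrors the study's dimensions; the latent-factor structure is invented):

```python
import numpy as np

def n_components_for(data, threshold=0.9):
    """Count the principal components needed to reach a cumulative
    explained-variance threshold.
    data: (subjects, variables) matrix of measurements."""
    centered = data - data.mean(axis=0)
    # Singular values give the component variances: var_i ~ s_i**2.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    # First index where the cumulative ratio reaches the threshold.
    return int(np.searchsorted(cum, threshold) + 1)

rng = np.random.default_rng(1)
# Toy data: 40 subjects, 33 variables driven by 5 latent factors plus noise.
latent = rng.normal(size=(40, 5))
loadings = rng.normal(size=(5, 33))
data = latent @ loadings + 0.1 * rng.normal(size=(40, 33))
k = n_components_for(data, 0.9)
```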

A Study on Extraction of Skin Region and Lip Using Skin Color of Eye Zone (눈 주위의 피부색을 이용한 피부영역검출과 입술검출에 관한 연구)

  • Park, Young-Jae;Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.19-30 / 2009
  • In this paper, we propose a method for detecting the face and facial components in an input image, using an eye map and a mouth map to detect the eyes and mouth. First, we locate the eye zone; second, we estimate the color value distribution of the skin using the color around the eye zone. The skin region has a characteristic distribution in the YCbCr color space, which we use to separate the skin region from the background. We then compute the color value distribution of the extracted skin region and refine the region accordingly, and finally detect the mouth within the extracted skin region using the mouth map. The proposed method outperforms conventional methods, yielding a more accurate mouth region.
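The YCbCr thresholding idea can be sketched as follows; the BT.601 conversion is standard, but the Cb/Cr ranges below are generic skin-color values for illustration, not the eye-zone-adapted distribution the paper estimates:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 RGB -> YCbCr conversion (inputs and outputs in 0-255)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold the chroma channels; the ranges are illustrative values
    commonly used for skin, standing in for a learned distribution."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (224, 172, 138)   # a typical skin tone
img[1, 1] = (30, 60, 200)     # blue background
mask = skin_mask(img)
```

In the paper's setting, the fixed ranges would be replaced by the distribution estimated from the pixels around the detected eye zone.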

Optimization of Deep Learning Model Based on Genetic Algorithm for Facial Expression Recognition (얼굴 표정 인식을 위한 유전자 알고리즘 기반 심층학습 모델 최적화)

  • Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.1 / pp.85-92 / 2020
  • Deep learning shows outstanding performance in image and video analysis tasks such as object classification, object detection, and semantic segmentation. In this paper, we analyze how the performance of deep learning models is affected by the characteristics of the training dataset, and propose a genetic-algorithm-based method for selecting the activation function and optimization algorithm of a deep learning model for facial expression classification. Classification performance is compared and analyzed by applying various algorithms for each component of the deep learning model on the CK+, MMI, and KDEF datasets. Simulation results show that a genetic algorithm can be an effective solution for optimizing the components of a deep learning model.

Soft Sign Language Expression Method of 3D Avatar (3D 아바타의 자연스러운 수화 동작 표현 방법)

  • Oh, Young-Joon;Jang, Hyo-Young;Jung, Jin-Woo;Park, Kwang-Hyun;Kim, Dae-Jin;Bien, Zeung-Nam
    • The KIPS Transactions:PartB / v.14B no.2 / pp.107-118 / 2007
  • This paper proposes a 3D avatar that expresses sign language naturally, using the lips, facial expression, complexion, pupil motion, and body motion as well as hand shape, hand posture, and hand motion, to overcome the limitations of conventional sign language avatars from a deaf person's viewpoint. To describe the motion data of the hands and other body components structurally and to enhance database performance, we introduce the concept of a hyper sign sentence. We demonstrate the superiority of the developed system through a usability test based on a questionnaire survey.

Face Detection for Cast Searching in Video (비디오 등장인물 검색을 위한 얼굴검출)

  • Paik Seung-ho;Kim Jun-hwan;Yoo Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.10C / pp.983-991 / 2005
  • Human faces are commonly found in videos such as dramas and provide useful information for video content analysis. Face detection therefore plays an important role in applications such as face recognition and face image database management. In this paper, we propose a face detection algorithm based on scene change detection as pre-processing for indexing and cast searching in video. The proposed algorithm consists of three stages: a scene change detection stage, a face region detection stage, and an eyes and mouth detection stage. Experimental results show that the proposed algorithm detects faces successfully over a wide range of facial variations in scale, rotation, pose, and position, and that its performance on profile images is improved by 24% compared with conventional methods using color components.
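The scene change detection stage can be sketched with a simple histogram-difference test, so that the face detector only runs on representative frames (the bin count and threshold below are illustrative, not the paper's):

```python
import numpy as np

def detect_scene_changes(frames, bins=16, threshold=0.5):
    """Flag frame indices where the grayscale histogram changes sharply
    relative to the previous frame."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                 # normalize for comparison
        if prev is not None:
            # L1 distance between consecutive histograms lies in [0, 2].
            if np.abs(hist - prev).sum() > threshold:
                cuts.append(i)
        prev = hist
    return cuts

rng = np.random.default_rng(2)
# Two synthetic "scenes": dark frames, then bright frames.
scene_a = [np.full((8, 8), 40) + rng.integers(0, 5, (8, 8)) for _ in range(3)]
scene_b = [np.full((8, 8), 200) + rng.integers(0, 5, (8, 8)) for _ in range(3)]
cuts = detect_scene_changes(scene_a + scene_b)
```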

Surgical Treatment of Facial Vascular Malformations (안면부 혈관기형 환자의 수술적 처치)

  • Kim, Soung-Min;Park, Jung-Min;Eo, Mi-Young;Myoung, Hoon;Lee, Jong-Ho;Choi, Jin-Young
    • Korean Journal of Cleft Lip And Palate / v.13 no.2 / pp.85-92 / 2010
  • Vascular malformations (VMs) in the head and neck region are present at birth and grow commensurately with the child; they can cause significant cosmetic problems for the patient, and some may even lead to serious, life-threatening hemorrhage. Although the molecular mechanisms underlying the formation of these VMs remain unclear, the lesions are known to result from abnormal development and morphogenesis. Histologically, there is no evidence of cellular proliferation, but rather progressive dilatation of abnormal channels, and VMs are classified by their predominant channel type into capillary, venous, lymphatic, arterial, and combined malformations. VMs with an arterial component are rheologically fast-flow, whereas capillary, lymphatic, and venous components are slow-flow. In this article, we review the clinical presentation, diagnosis, and management of VMs of the facial region, with the authors' embolization and surgical treatment cases.

Face recognition by using independent component analysis (독립 성분 분석을 이용한 얼굴인식)

  • 김종규;장주석;김영일
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.10 / pp.48-58 / 1998
  • We present a method that can recognize face images using independent component analysis, which is used mainly for blind source separation in signal processing. We assume that a face image can be expressed as the sum of a set of statistically independent feature images, which we obtained using independent component analysis. Face recognition was performed by projecting the input image into the feature image space and then comparing its projection components with those of stored reference images. We carried out face recognition experiments on a database of varied face images (a total of 400 varied facial images, collected from 10 persons) and compared the performance of our method with that of the eigenface method based on principal component analysis. The presented method achieved a higher recognition rate than the eigenface method and showed robustness to random noise added to the input facial images.
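The matching step, projecting an image onto the feature images and comparing projection coefficients against stored references, can be sketched as follows (the random "feature images" and two-person gallery are toy placeholders; the paper derives its basis with ICA):

```python
import numpy as np

def project(feature_imgs, img):
    """Least-squares projection coefficients of img onto a basis of
    (not necessarily orthogonal) feature images, one per row."""
    coef, *_ = np.linalg.lstsq(feature_imgs.T, img, rcond=None)
    return coef

def recognize(feature_imgs, references, probe):
    """Return the name of the reference whose projection coefficients
    are nearest to the probe's. references: dict name -> flattened image."""
    p = project(feature_imgs, probe)
    dists = {name: np.linalg.norm(project(feature_imgs, ref) - p)
             for name, ref in references.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(3)
basis = rng.normal(size=(4, 100))          # stand-in "feature images"
refs = {"alice": basis[0] + 0.3 * basis[1],
        "bob":   basis[2] - 0.2 * basis[3]}
probe = refs["alice"] + 0.05 * rng.normal(size=100)   # noisy view of alice
who = recognize(basis, refs, probe)
```

The robustness to noise reported in the abstract comes from the noise projecting only weakly onto the low-dimensional feature space.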

Fast and Robust Face Detection based on CNN in Wild Environment (CNN 기반의 와일드 환경에 강인한 고속 얼굴 검출 방법)

  • Song, Junam;Kim, Hyung-Il;Ro, Yong Man
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1310-1319 / 2016
  • Face detection is the first step in a wide range of face applications. However, detecting faces in the wild is still a challenging task due to the wide variations in pose, scale, and occlusion. Recently, many deep learning methods have been proposed for face detection, but further improvements are required in the wild. Another important issue to be considered in face detection is computational complexity: current state-of-the-art deep learning methods require a large number of patches to deal with varying scales and arbitrary image sizes, which results in increased computational complexity. To reduce the complexity while achieving better detection accuracy, we propose a fully convolutional network-based face detector that can take arbitrarily sized input and produce feature maps (heat maps) matching the input image size. To deal with the various face scales, a multi-scale network architecture that utilizes the facial components when learning the feature maps is proposed. In addition, we design a multi-task learning technique to improve detection performance. Extensive experiments conducted on the FDDB dataset show that the proposed method outperforms state-of-the-art methods with an accuracy of 82.33% at 517 false alarms, while significantly improving computational efficiency.
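The fully convolutional property, that the response map's size follows the input size instead of requiring fixed-size patches, can be illustrated with a single handcrafted correlation kernel (a real CNN learns many such kernels across layers; the "face" pattern below is a toy):

```python
import numpy as np

def heat_map(img, kernel):
    """'Valid' 2-D cross-correlation: sliding a convolutional detector
    over an arbitrarily sized image yields a response map whose size
    tracks the input, with no patch extraction needed."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

face_like = np.array([[1.0, 0.0, 1.0],    # two "eyes"
                      [0.0, 0.0, 0.0],
                      [1.0, 1.0, 1.0]])   # a "mouth"
img = np.zeros((10, 12))                  # arbitrary input size
img[4:7, 5:8] = face_like                 # embed the pattern
response = heat_map(img, face_like)
peak = np.unravel_index(np.argmax(response), response.shape)
```

The peak of the heat map marks the embedded pattern's location; a multi-scale variant would run the same detector over several resized inputs.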