• Title/Abstract/Keywords: facial features

635 search results

Analogical Face Generation based on Feature Points

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk
    • Journal of Multimedia Information System / Vol. 6, No. 1 / pp.15-22 / 2019
  • There are many ways to perform face recognition, and the first step is face detection: if no face is found, recognition fails. Face detection is difficult because the face varies with size, left-right and up-down rotation, profile versus frontal view, facial expression, and lighting conditions. In this study, facial features are extracted and geometrically reconstructed in order to improve the recognition rate within the extracted face region. The reconstructed facial feature vector is also used to adjust the face angle and to improve the recognition rate at each angle. In recognition attempts using the geometrically reconstructed results, recognition performance improved for both up-down and left-right face angles.
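The paper does not include code; as an illustration, the in-plane part of such a geometric reconstruction (rotating detected feature points so the eye line becomes horizontal before recognition) can be sketched in a few lines of numpy. The function name and landmark indexing are hypothetical, not taken from the paper.

```python
import numpy as np

def align_landmarks(points, left_eye, right_eye):
    """Rotate 2-D landmark points so the eye line becomes horizontal.

    points    : (N, 2) array of facial feature coordinates
    left_eye  : index of the left-eye landmark in `points`
    right_eye : index of the right-eye landmark
    """
    p = np.asarray(points, dtype=float)
    dx, dy = p[right_eye] - p[left_eye]
    angle = np.arctan2(dy, dx)              # in-plane tilt of the face
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])       # rotation undoing the tilt
    center = p[[left_eye, right_eye]].mean(axis=0)
    return (p - center) @ rot.T + center    # rotate about the eye midpoint
```

After this normalization, both eye landmarks share the same y coordinate, which removes one source of the rotation variation the abstract describes.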

A case of Noonan syndrome diagnosed using the facial recognition software (FACE2GENE)

  • Kim, Soo Kyoung;Jung, So Yoon;Bae, Seong Phil;Kim, Jieun;Lee, Jeongho;Lee, Dong Hwan
    • Journal of Genetic Medicine / Vol. 16, No. 2 / pp.81-84 / 2019
  • Clinicians often have difficulty diagnosing patients with subtle phenotypes of Noonan syndrome. Facial recognition technology can help in the identification of several genetic syndromes with facial dysmorphic features, especially those with mild or atypical phenotypes. A patient visited our clinic at 5 years of age with short stature. She was administered growth hormone treatment for 6 years, but her growth curve remained below the 3rd percentile. She and her mother had wide-spaced eyes and short stature, but there were no other remarkable features of a genetic syndrome. We analyzed their photographs using a smartphone facial recognition application. The results suggested Noonan syndrome; therefore, we performed targeted next-generation sequencing of genes associated with short stature. The results showed that they had a mutation in the PTPN11 gene known to be pathogenic for Noonan syndrome. Facial recognition technology can help in the diagnosis of Noonan syndrome and other genetic syndromes, especially in patients with mild phenotypes.

얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합 (A flexible Feature Matching for Automatic Face and Facial Feature Points Detection)

  • 박호식;배철수
    • 한국정보통신학회논문지 / Vol. 7, No. 4 / pp.705-711 / 2003
  • This paper proposes a system that automatically detects faces and facial feature points (FFPs). A face is represented as a graph whose nodes, placed at designated feature points, are labeled with Gabor features, and whose edges encode their spatial relations. The proposed flexible feature matching takes corresponding features between the model and the input image. The matching model is locally competitive and globally cooperative, acting like an anisotropic diffusion process over the image space. A face identification system built on this method operates smoothly even with complex backgrounds, pose variation, and distorted face images, demonstrating the effectiveness of the proposed approach.
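The Gabor labels on the graph nodes can be illustrated with a minimal numpy sketch: the real part of a 2-D Gabor kernel, and a "jet" of filter responses for one image patch at several orientations. Function names and parameter defaults here are illustrative, not taken from the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coords to orientation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_jet(patch, thetas, wavelength=4.0, sigma=2.0):
    """Responses of one square image patch to Gabor filters at several orientations."""
    return np.array([np.sum(patch * gabor_kernel(patch.shape[0], wavelength, t, sigma))
                     for t in thetas])
```

In elastic-graph approaches, a jet like this is computed at each node, and matching compares jets between the model graph and the input image.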

안면 움직임 분석을 통한 단음절 음성인식 (Monosyllable Speech Recognition through Facial Movement Analysis)

  • 강동원;서정우;최진승;최재봉;탁계래
    • 전기학회논문지 / Vol. 63, No. 6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, vocalized 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data were converted into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were then learned and recognized for each monosyllable using speech recognition algorithms based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy over the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
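The recognition step pairs an HMM with the Viterbi algorithm. As a self-contained illustration (not the authors' implementation), a discrete-observation Viterbi decoder can be written directly in numpy:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete-observation HMM.

    obs     : sequence of observation indices
    start_p : (S,) initial state probabilities
    trans_p : (S, S) transition probabilities
    emit_p  : (S, O) emission probabilities
    """
    logp = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans_p)   # scores[i, j]: prev i -> next j
        back.append(scores.argmax(axis=0))          # best predecessor per state
        logp = scores.max(axis=0) + np.log(emit_p[:, o])
    path = [int(logp.argmax())]                     # backtrack from best final state
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1], float(logp.max())
```

In a lip-reading setting, the observations would be quantized facial-movement parameter vectors and the hidden states the per-monosyllable HMM states; here the probabilities are generic placeholders.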

후보영역의 밝기 분산과 얼굴특징의 삼각형 배치구조를 결합한 얼굴의 자동 검출 (Automatic Face Extraction with Unification of Brightness Distribution in Candidate Region and Triangle Structure among Facial Features)

  • 이칠우;최정주
    • 한국멀티미디어학회논문지 / Vol. 3, No. 1 / pp.23-33 / 2000
  • This paper describes an algorithm for detecting faces photographed under natural conditions against complex backgrounds. The method rests on the observation that, within a block of suitable size, a face region has relatively uniform brightness. Based on this, the image is first divided into hierarchical blocks; regions whose blocks have similar brightness are quickly selected as face candidates, and detailed facial features are then sought inside those candidate regions in a staged, coarse-to-fine process. To extract the features within a candidate region, a local luminance transform that emphasizes dark, narrow regions is used, and for the final decision the triangular arrangement of the facial organs serves as a constraint. Because the face region is processed in a very simple way, the parameter-setting problems that arise when extracting feature points are avoided, enabling a stable system that does not depend strongly on parameter values.
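The candidate-selection idea, marking blocks of near-uniform brightness, can be sketched as follows. The block size and the standard-deviation threshold `max_std` are hypothetical values, not those of the paper.

```python
import numpy as np

def face_candidate_blocks(gray, block=8, max_std=12.0):
    """Mark blocks whose brightness is nearly uniform as face candidates.

    gray : (H, W) grayscale image; returns a boolean (H//block, W//block) mask.
    """
    h, w = gray.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            patch = gray[by * block:(by + 1) * block,
                         bx * block:(bx + 1) * block]
            mask[by, bx] = patch.std() <= max_std   # low variance -> candidate
    return mask
```

Only blocks passing this cheap test would then be examined with the more expensive feature extraction and triangle-arrangement check.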


대규모 비디오 감시 환경에서 프라이버시 보호를 위한 다중 레벨 특징 기반 얼굴검출 방법에 관한 연구 (Face Detection Using Multi-level Features for Privacy Protection in Large-scale Surveillance Video)

  • 이승호;문정익;김형일;노용만
    • 한국멀티미디어학회논문지 / Vol. 18, No. 11 / pp.1268-1280 / 2015
  • In a video surveillance system, the exposure of a person's face is a serious threat to personal privacy. To protect personal privacy in large amounts of video, an automatic face detection method is required to locate and mask faces. However, in real-world surveillance videos, the effectiveness of existing face detection methods deteriorates under large variations in facial appearance (e.g., facial pose, illumination) or with degraded faces (e.g., occluded or low-resolution faces). This paper proposes a new face detection method based on multi-level facial features. In each video frame, different kinds of spatial features are independently extracted and analyzed, complementing each other under the aforementioned challenges. Temporal-domain analysis is also exploited to consolidate the proposed method. Experimental results show that, compared to competing methods, the proposed method achieves very high recall rates while maintaining acceptable precision.
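The recall and precision rates reported above are the standard detection metrics; for reference, they follow directly from true-positive, false-positive, and false-negative counts:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from detection counts.

    tp : correctly detected faces
    fp : detections that are not faces
    fn : faces the detector missed
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For privacy masking, missing a face (a false negative) leaks identity, which is why the paper prioritizes recall over precision.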

RNN을 이용한 Expressive Talking Head from Speech의 합성 (Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network)

  • 사쿠라이 류헤이;심바 타이키;야마조에 히로타케;이주호
    • 로봇학회논문지 / Vol. 13, No. 1 / pp.16-25 / 2018
  • A talking head (TH) is a speaking-face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input alone. Generating a TH from speech can be regarded as a regression problem from the acoustic feature sequence to the facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled by a bidirectional RNN and trained on the SAVEE database of frontal utterance face animations. The proposed method generates a TH with facial expression and intonation using acoustic features such as MFCCs, dynamic elements of MFCCs, energy, and F0. According to the experiments, a bidirectional RNN whose first and second layers are BLSTM layers predicted the face codes best. For evaluation, a questionnaire survey was conducted with 62 people who watched TH animations generated by the proposed method and a previous method. As a result, 77% of the respondents answered that the TH generated by the proposed method matched the speech well.
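The bidirectional recurrence at the heart of the model can be illustrated with a toy forward pass in numpy. This is a plain tanh RNN rather than the paper's BLSTM, and all weights and dimensions are placeholders; it only shows how forward and backward passes are concatenated per time step so that each face-code prediction sees both past and future acoustics.

```python
import numpy as np

def rnn_pass(xs, wx, wh, h0):
    """One direction of a simple (Elman) recurrent pass over a sequence."""
    hs, h = [], h0
    for x in xs:
        h = np.tanh(wx @ x + wh @ h)   # new state from input and previous state
        hs.append(h)
    return hs

def bidirectional_encode(xs, wx_f, wh_f, wx_b, wh_b, hidden):
    """Concatenate forward and backward hidden states at each time step."""
    h0 = np.zeros(hidden)
    fwd = rnn_pass(xs, wx_f, wh_f, h0)
    bwd = rnn_pass(xs[::-1], wx_b, wh_b, h0)[::-1]   # run backward, re-reverse
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```

A final linear layer (omitted here) would map each concatenated state to a face code.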

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha;Kang, Byung-Jun;Park, Kang-Ryoung
    • 한국멀티미디어학회논문지 / Vol. 15, No. 1 / pp.40-50 / 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on how other facial components, such as the eyebrows and chin, affect recognition performance. We measured the recognition accuracy contributed by these facial components and compared computer-based and human-based facial recognition methods. This research is novel in the following four ways compared to previous works. First, we measured the effect of components such as the eyebrows and chin, and compared the accuracy of computer-based face recognition to that of human-based recognition for each facial component. Second, for computer-based recognition, facial components were automatically detected using the Adaboost algorithm and an active appearance model (AAM), and user authentication was performed with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features included (eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based recognition, whereas consistent inclusion of certain features, such as the chin area, had more influence on the accuracy of computer-based recognition, because a computer classifies faces using the pixel values of facial images. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition, although the problem of occlusion by hair must be solved before the eyebrow feature can be used in practice.
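The PCA-based authentication stage can be sketched as a classic eigenfaces pipeline: project mean-centered, flattened face images onto the top principal axes and identify by nearest neighbour in that subspace. This is a generic illustration, not the authors' code.

```python
import numpy as np

def pca_project(faces, k):
    """Fit PCA on flattened face vectors and return mean, basis, projections.

    faces : (n_samples, n_pixels) matrix, one flattened face image per row.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal axes without forming
    # the full covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                        # top-k "eigenfaces"
    return mean, basis, centered @ basis.T

def match(face, mean, basis, gallery_codes):
    """Nearest-neighbour identification in the eigenface subspace."""
    code = (face - mean) @ basis.T
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return int(dists.argmin())
```

Because classification happens on projected pixel values, masking out a component (e.g., the chin) changes every code, which is consistent with the paper's observation about computer-based recognition.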

Photogrammetric Analysis of Attractiveness in Indian Faces

  • Duggal, Shveta;Kapoor, DN;Verma, Santosh;Sagar, Mahesh;Lee, Yung-Seop;Moon, Hyoungjin;Rhee, Seung Chul
    • Archives of Plastic Surgery / Vol. 43, No. 2 / pp.160-171 / 2016
  • Background The objective of this study was to assess the attractive facial features of the Indian population. We evaluated subjective ratings of facial attractiveness and identified which facial aesthetic subunits were important for facial attractiveness. Methods A cross-sectional study was conducted on 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer, and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method of facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores behaved similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness differed between men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population.
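The correlation examined in Experiment 2 is a plain Pearson coefficient between the VAS and BAPA score lists; for reference, it can be computed with nothing beyond the standard library:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # un-normalized covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 would mean BAPA tracks the raters' VAS scores closely; the paper reports a relatively weak coefficient.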

서프 및 하프변환 기반 운전자 동공 검출기법 (Face and Iris Detection Algorithm based on SURF and circular Hough Transform)

  • 아텀 렌스키;이종수
    • 대한전자공학회논문지SP / Vol. 47, No. 5 / pp.175-182 / 2010
  • This paper presents a new method for detecting the face and the pupils, with experimental results from applying it to monitoring a driver's pupils for safe driving. The proposed method consists of three main stages. First, the face is located by skin-color segmentation based on a trained model rather than the heuristic models used so far. Next, facial-feature segmentation detects parts such as the eyes, mouth, and eyebrows, using PDF estimates of characteristic features extracted from each facial region. Finally, a circular Hough transform finds the pupil within the eye. Applying the method to web face images under varying illumination and to CCD face images of drivers, pupil-detection experiments confirmed a high detection rate.
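The final stage, a circular Hough transform, votes each edge pixel onto the centres of all circles that could pass through it, and the best-supported (radius, centre) cell wins. A compact numpy sketch over a small set of candidate radii (names and parameters are illustrative, not the authors' implementation):

```python
import numpy as np

def hough_circles(edge_points, shape, radii):
    """Detect the strongest circle among candidate radii by accumulator voting.

    edge_points : iterable of (y, x) edge pixel coordinates
    shape       : (H, W) of the image
    radii       : candidate circle radii
    Returns (radius, cy, cx) of the accumulator cell with the most votes.
    """
    acc = np.zeros((len(radii), *shape), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            # Possible centres of a radius-r circle through this edge point.
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (ri, cy[ok], cx[ok]), 1)   # cast votes
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], int(cy), int(cx)
```

For pupil detection, the edge points would come from an edge detector run inside the segmented eye region, and the radius range would be restricted to plausible pupil sizes.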