• Title/Summary/Keyword: 얼굴 변형 (face deformation)

Sequential Registration of the Face Recognition candidate using SKL Algorithm (SKL 알고리즘을 이용한 얼굴인식 후보의 점진적 등록)

  • Han, Hag-Yong;Lee, Sung-Mok;Kwak, Boo-Dong;Choi, Won-Tae;Kang, Bong-Soon
    • Journal of the Institute of Convergence Signal Processing / v.11 no.4 / pp.320-325 / 2010
  • This paper presents a method and procedure for sequentially registering candidates in a face recognition system based on PCA (Principal Component Analysis). The principal components are updated sequentially with the SKL algorithm, an improved R-SVD algorithm. This approach solves the re-training problem that arises when the number of candidates grows incrementally in PCA-based face recognition, and it can also be used in tracking systems that must remain robust to brightness changes while relying on the principal components. The paper proposes a procedure for a face recognition system that sequentially updates the principal components using the SKL algorithm, compares its recognition performance with the batch procedure that computes the principal components using the standard KL algorithm, and experimentally confirms the effect of the forgetting factor in the SKL algorithm.
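
  The incremental eigenbasis update at the heart of SKL-style methods can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal numpy sketch of one sequential update with a forgetting factor, and the function name, interface, and default forgetting value are illustrative assumptions.

```python
import numpy as np

def skl_update(U, S, mean, n_seen, X_new, forget=0.95, k=None):
    """One incremental eigenbasis update in the spirit of SKL/R-SVD (illustrative).

    U      : (d, k) current eigenvectors (columns)
    S      : (k,)   current singular values
    mean   : (d,)   current sample mean
    n_seen : number of samples absorbed so far
    X_new  : (d, m) new samples as columns
    forget : forgetting factor that down-weights the old basis
    """
    m = X_new.shape[1]
    new_mean = X_new.mean(axis=1)
    total = n_seen + m
    # Updated running mean.
    mean_upd = (n_seen * mean + m * new_mean) / total
    # Center the new data and add a correction term for the mean shift.
    Xc = X_new - new_mean[:, None]
    mean_corr = np.sqrt(n_seen * m / total) * (new_mean - mean)
    # Stack the (down-weighted) old basis with the new information and re-factor.
    M = np.hstack([forget * U * S, Xc, mean_corr[:, None]])
    U_new, S_new, _ = np.linalg.svd(M, full_matrices=False)
    if k is not None:  # keep only the leading components
        U_new, S_new = U_new[:, :k], S_new[:k]
    return U_new, S_new, mean_upd, total
```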

Design of RBFNNs Pattern Classifier Realized with the Aid of Face Features Detection (얼굴 특징 검출에 의한 RBFNNs 패턴분류기의 설계)

  • Park, Chan-Jun;Kim, Sun-Hwan;Oh, Sung-Kwun;Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.2 / pp.120-126 / 2016
  • In this study, we propose a method for effectively detecting and recognizing faces in images using an RBFNNs pattern classifier and an HCbCr-based skin color feature. Skin color detection is computationally fast and robust to pattern variation in face detection; however, objects with similar colors can be mistakenly detected as faces. To enhance the accuracy of skin detection, we therefore combine the H component and the CbCr components jointly obtained from the HSI and YCbCr color spaces. The exact location of the face is then found within the skin-color candidate region by detecting the eyes with Haar-like features. Finally, face recognition is performed with the proposed FCM-based RBFNNs pattern classifier. We present results from computer simulation experiments carried out on the Cambridge ICPR image database.
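
  As an illustration of the skin-color step described in this abstract, the following sketch builds a rough "HCbCr" mask by intersecting an H-channel threshold (from HSV) with Cb/Cr thresholds (from YCbCr) using OpenCV. The threshold ranges and the morphological clean-up are assumptions chosen for illustration, not the values used in the paper.

```python
import cv2
import numpy as np

def hcbcr_skin_mask(bgr):
    """Rough skin mask combining H (from HSV) with Cb/Cr (from YCrCb)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    h = hsv[:, :, 0]                 # OpenCV hue is in [0, 179]
    cr = ycrcb[:, :, 1]
    cb = ycrcb[:, :, 2]
    # Illustrative threshold ranges; the paper's actual ranges may differ.
    h_mask = (h < 25) | (h > 165)
    crcb_mask = (cr > 135) & (cr < 175) & (cb > 85) & (cb < 130)
    mask = (h_mask & crcb_mask).astype(np.uint8) * 255
    # Clean up small speckles before searching for face candidates.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```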

Generation of Changeable Face Template by Combining Independent Component Analysis Coefficients (독립성분 분석 계수의 합성에 의한 가변 얼굴 생체정보 생성 방법)

  • Jeong, Min-Yi;Lee, Chel-Han;Choi, Jeung-Yoon;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.16-23 / 2007
  • Changeable biometrics has been developed as a solution to the problem of enhancing security and privacy. The idea is to transform a biometric signal or feature into a new one for the purposes of enrollment and matching. In this paper, we propose a changeable biometric system that can be applied to appearance-based face recognition. In the feature extraction step, the ICA (Independent Component Analysis) coefficient vector extracted from an input face image is randomly replaced using its mean and variance. The replaced vectors are randomly scrambled, and a new transformed face coefficient vector (transformed template) is generated by combining the two transformed vectors. When this transformed template is compromised, it is replaced using new random numbers and a new scrambling rule. Because the transformed template is generated by the addition of two vectors, the original ICA coefficients cannot easily be recovered from the transformed coefficients.
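
  The transform outlined above (statistics-based random replacement of ICA coefficients, scrambling, and addition of two vectors) can be sketched in a few lines of numpy. Everything below, including the replacement rate and how the key is handled, is an illustrative assumption rather than the paper's exact procedure.

```python
import numpy as np

def make_changeable_template(coeffs, mean, std, seed):
    """Generate a non-invertible "changeable" template from ICA coefficients.

    coeffs : (k,) ICA coefficient vector of the enrolled face
    mean, std : (k,) per-coefficient statistics from the training set
    seed   : user-specific key; issuing a new seed produces a new template
    """
    rng = np.random.default_rng(seed)
    # Randomly replace a subset of coefficients with draws from their statistics.
    replace = rng.random(coeffs.shape) < 0.5
    v1 = np.where(replace, rng.normal(mean, std), coeffs)
    # Build a second vector by scrambling (permuting) the coefficients.
    v2 = v1[rng.permutation(coeffs.size)]
    # The stored template is the sum, so the original coefficients are not
    # directly recoverable from it.
    return v1 + v2
```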

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts face feature points using statistical models of shape and texture based on PCA (Principal Component Analysis). The method is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to its initial values, and the detection error increases when an input image differs greatly from the training data. In particular, the algorithm is accurate for closed lips, but the detection error grows for opened or deformed lips, depending on the user's facial expression. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. We first select a search region based on the face feature points detected by the AAM algorithm. Lip corner points are then extracted in the selected search region by Canny edge detection and histogram projection. The lip region is accurately detected by combining the color and edge information of the lips within the search region, which is adjusted according to the position of the detected lip corners. As a result, both the accuracy and the processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
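
  The lip-corner step (Canny edges followed by histogram projection inside a search region) might look roughly like the OpenCV sketch below; the Canny thresholds and the assumption that the search region is supplied as a cropped grayscale patch are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def find_lip_corners(gray_roi):
    """Estimate left/right lip corners in a grayscale mouth search region.

    gray_roi : cropped grayscale image around the mouth (e.g. from AAM points)
    Returns ((x_left, y_left), (x_right, y_right)) in ROI coordinates, or None.
    """
    edges = cv2.Canny(gray_roi, 50, 150)          # illustrative thresholds
    col_proj = edges.sum(axis=0)                  # vertical (column) projection
    cols = np.nonzero(col_proj)[0]
    if cols.size == 0:
        return None
    x_left, x_right = int(cols[0]), int(cols[-1])
    # Take the row with the strongest edge response in each corner column.
    y_left = int(np.argmax(edges[:, x_left]))
    y_right = int(np.argmax(edges[:, x_right]))
    return (x_left, y_left), (x_right, y_right)
```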

Automatic Synchronization of Separately-Captured Facial Expression and Motion Data (표정과 동작 데이터의 자동 동기화 기술)

  • Jeong, Tae-Wan;Park, Sang-Il
    • Journal of the Korea Computer Graphics Society / v.18 no.1 / pp.23-28 / 2012
  • In this paper, we present a new method for automatically synchronizing captured facial expression data with its corresponding motion data. In a typical optical motion capture setup, detailed facial expressions cannot be captured simultaneously with the body motion because their resolution requirement is higher than that of body motion capture. The two are therefore captured in separate sessions and must be synchronized in post-processing before they can be used to generate a convincing character animation. Based on the patterns of the actor's neck movement extracted from the two data sets, we present a non-linear time warping method for automatic synchronization. We demonstrate the viability of the method with actual examples.
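
  Non-linear time warping of two 1-D signals (here, a neck-movement measure from each capture session) is commonly done with dynamic time warping; the sketch below is a plain numpy DTW, offered as a generic illustration rather than the authors' specific formulation.

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic time warping between two 1-D signals a and b.

    Returns the list of index pairs (i, j) aligning a[i] to b[j].
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```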

Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin;Kim, Sung-Sin;Bae, Hyun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.05a / pp.137-140 / 2003
  • A caricature, in the general sense, is a picture or piece of writing that extracts and humorously exaggerates the characteristics of a person or object. In other words, a caricature is a drawing in which the features of a person's face are captured, exaggerated, or distorted. Existing computer-based caricature generation methods include the PICASSO System, which uses statistical differences of input image coordinates [1], a method that expresses the creator's vague impressions using fuzzy logic, image warping methods, and methods based on several stages of vector field transformation. In this paper, a real-time or prepared image is taken as input and stored, processed in four stages, and a caricatured image is finally generated. In the first stage, the face is detected in the image; in the second stage, geometric information of specific facial parts is extracted as coordinate values. In the third stage, the image is transformed with a local warping technique using the coordinates obtained in the previous stage. In the fourth stage, the deformed image is converted into an improved edge image using fuzzy logic to obtain the caricature image. Using various image processing techniques such as image recognition, transformation, and edge detection, we implemented a caricature generation system that is simpler and requires no complex computation compared with existing caricature generation methods.
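
  To give a concrete sense of the third stage (local warping around a facial feature), the sketch below applies a simple radial bulge around a given point with cv2.remap. The warp model and its strength parameter are illustrative assumptions and not the paper's local warping formula.

```python
import cv2
import numpy as np

def local_bulge_warp(img, center, radius, strength=0.5):
    """Exaggerate a local facial feature by bulging pixels around `center`.

    img      : input image (H, W, 3)
    center   : (cx, cy) pixel coordinates of the feature to exaggerate
    radius   : radius of the affected circular region, in pixels
    strength : 0..1, how strongly pixels are pulled toward the center
    """
    h, w = img.shape[:2]
    cx, cy = center
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    # Inside the radius, sample from a point closer to the center (magnifies it).
    factor = np.where(r < radius, 1.0 - strength * (1.0 - r / radius), 1.0)
    map_x = (cx + dx * factor).astype(np.float32)
    map_y = (cy + dy * factor).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```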

Appearance Information Extraction and Shading for Realistic Caricature Generation (실사형 캐리커처 생성을 위한 형태 정보 추출 및 음영 함성)

  • Park, Yeon-Chool;Oh, Hae-Seok
    • The KIPS Transactions:PartB / v.11B no.3 / pp.257-266 / 2004
  • This paper proposes a caricature generation system that uses a shading mechanism to extract the textural features of a face, producing more realistic caricatures. Since the system is vector-based, the generated character's face has no size limit or constraint, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, owing to the small file size of vector files, the system can also be used in mobile environments. The paper presents methods that generate the vector-based face, create the shading, and synthesize the shading with the vector face.

Audio-Visual Integration based Multi-modal Speech Recognition System (오디오-비디오 정보 융합을 통한 멀티 모달 음성 인식 시스템)

  • Lee, Sahng-Woon;Lee, Yeon-Chul;Hong, Hun-Sop;Yun, Bo-Hyun;Han, Mun-Sung
    • Proceedings of the Korea Information Processing Society Conference / 2002.11a / pp.707-710 / 2002
  • This paper proposes a multi-modal speech recognition system based on the fusion of audio and video information. By fusing acoustic features with visual features, the system efficiently recognizes human speech in noisy environments. Mel Frequency Cepstrum Coefficients (MFCC) are used as the acoustic features, and feature vectors obtained by principal component analysis are used as the visual features. In addition, to improve the recognition rate of the visual information itself, the face region is located using a skin color model and facial shape information, and the lip region is then detected with a robust lip region extraction method. Audio-visual fusion is performed as early fusion using a modified time-delay neural network. Experiments show that fusing the audio and visual information improves performance by roughly 5%-20% compared with using audio information alone.
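
  A minimal sketch of the early-fusion front end (MFCC acoustic features concatenated with PCA-reduced lip-region features) is shown below using librosa and scikit-learn. The frame-rate alignment, feature dimensions, and the absence of the paper's modified time-delay neural network are all simplifying assumptions.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA

def fused_features(wav_path, lip_frames, n_mfcc=13, n_visual=20):
    """Build early-fusion audio-visual feature vectors.

    wav_path   : path to the speech waveform
    lip_frames : (T, h, w) array of grayscale lip-region images, one per video frame
    """
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (T_audio, n_mfcc)
    visual = lip_frames.reshape(len(lip_frames), -1).astype(np.float32)
    visual = PCA(n_components=n_visual).fit_transform(visual)  # (T_video, n_visual)
    # Crude rate alignment: resample visual frames to the audio frame count.
    idx = np.linspace(0, len(visual) - 1, num=len(mfcc)).astype(int)
    return np.hstack([mfcc, visual[idx]])                      # (T_audio, n_mfcc + n_visual)
```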

Performance Improvement of the Face Recognition Using the Properties of Wavelet Transform (웨이블릿 변환의 특성을 이용한 얼굴 인식 성능 개선)

  • Park, Kyung-Jun;Seo, Seok-Yong;Koh, Hyung-Hwa
    • Journal of Advanced Navigation Technology / v.17 no.6 / pp.726-735 / 2013
  • This paper proposes face recognition methods that improve recognition performance by using the properties of the wavelet transform. The discrete wavelet transform is computed with the Daubechies D4 filter as the mother wavelet. By using only the LL subband of the discrete wavelet transform, the processing time and the amount of memory required in the recognition stage are reduced. To improve the recognition rate without further loss from reshaping the two-dimensional data, 2D LDA is applied, and an SVM is trained on the feature vectors obtained by 2D LDA. Experiments are performed in Matlab on the ORL and Yale database sets. The test results show that the proposed method is superior to existing methods in both recognition rate and processing time.
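
  A hedged sketch of this pipeline using PyWavelets and scikit-learn follows; for brevity, standard LDA stands in for the paper's 2D LDA, so this approximates rather than reproduces the method.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def ll_subband(img):
    """Keep only the LL subband of a single-level Daubechies DWT.

    'db4' is used here; depending on the naming convention, the paper's
    Daubechies D4 filter may instead correspond to PyWavelets' 'db2'.
    """
    ll, (lh, hl, hh) = pywt.dwt2(img, 'db4')
    return ll.ravel()

def train_face_classifier(train_imgs, train_labels):
    """train_imgs: list of 2-D grayscale face images, train_labels: list of ids."""
    X = np.array([ll_subband(im) for im in train_imgs])
    # Standard LDA stands in for the paper's 2D LDA in this sketch.
    lda = LinearDiscriminantAnalysis().fit(X, train_labels)
    svm = SVC(kernel='linear').fit(lda.transform(X), train_labels)
    return lda, svm

def predict(lda, svm, img):
    return svm.predict(lda.transform(ll_subband(img).reshape(1, -1)))[0]
```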

Face Recognition Using Histograms of Multi-resolution Segments Based on Discriminant Face Descriptor (판별 얼굴 기술자 기반의 다중 해상도 분할 영역 히스토그램을 이용한 얼굴인식 방법)

  • Lee, Jang-yoon;Lee, Yonggeol;Choi, Sang-Il
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.2 / pp.97-105 / 2016
  • We propose a face recognition method that uses histograms of multi-resolution segments to effectively exploit the local information of faces. Since facial variations occur at various scales, the original DFD method, which uses histograms from sub-regions of a single fixed size, is not effective for capturing local information. In this paper, we first divide an image into several sub-regions and extract the DFD (Discriminant Face Descriptor) from each sub-region. By further dividing each sub-region into segments at multiple resolutions and extracting a histogram for each segment, we reduce the loss of local information during recognition. Experimental results on the Yale B, AR, and CAS-PEAL-R1 databases show that the proposed method improves recognition performance compared to the existing DFD-based method.
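
  The multi-resolution segment histogram idea can be sketched as follows: a code map (a simple stand-in for the DFD codes, which is an assumption here) is split into sub-regions, each sub-region is split into segments at several resolutions, and the per-segment histograms are concatenated.

```python
import numpy as np

def multires_histograms(code_map, grid=(4, 4), resolutions=(1, 2, 4), n_codes=256):
    """Concatenate histograms of multi-resolution segments of a code map.

    code_map    : 2-D array of integer codes (e.g. DFD or LBP codes)
    grid        : number of sub-regions along (rows, cols)
    resolutions : each sub-region is further split into r x r segments
    """
    h, w = code_map.shape
    rh, rw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            sub = code_map[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            for r in resolutions:
                sh, sw = sub.shape[0] // r, sub.shape[1] // r
                for si in range(r):
                    for sj in range(r):
                        seg = sub[si * sh:(si + 1) * sh, sj * sw:(sj + 1) * sw]
                        hist, _ = np.histogram(seg, bins=n_codes, range=(0, n_codes))
                        feats.append(hist)
    return np.concatenate(feats).astype(np.float32)
```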