• Title/Summary/Keyword: Face Component


Psychological Evaluation and the Applicability of the Impression Transfer Vector Method for Synthesizing Higher-Order Facial Impressions

  • Sakuta, Yuiko; Ishi, Hanae; Akamatsu, Shigeru; Gyoba, Jiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.689-694 / 2009
  • We developed a facial image generation technique that can manipulate facial impressions. The present study applied this impression transfer method to higher-order impressions such as "elegance" and "attractiveness" and confirmed its psychological validity using the semantic differential method. We then applied the method in two types of cognitive experiments. First, we examined the contributions of texture and shape to facial impressions, using face images whose impressions had already been quantitatively manipulated by the method. Second, we used such stimuli to examine the effect of facial impressions and attractiveness on the "mere exposure effect." We conclude that the impression transfer vector method is an effective tool for quantitatively manipulating facial impressions in various cognitive studies.


A Face Image Generation System for Transforming Three Dimensions of Higher-Order Impression

  • Ishi, Hanae; Sakuta, Yuiko; Akamatsu, Shigeru; Gyoba, Jiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.703-708 / 2009
  • The present paper describes the application of an improved impression transfer vector method (Sakurai et al., 2007) to transform the three basic dimensions (Evaluation, Activity, and Potency) of higher-order impressions. First, the shapes and surface textures of a set of faces were represented by multi-dimensional vectors. Second, the variation among faces was coded in reduced parameters derived by principal component analysis. Third, facial attributes along a given impression dimension were analyzed to select discriminative parameters from among the principal components with higher sensitivity to impressions, yielding an impression transfer vector. Finally, the parametric coordinates were changed by adding or subtracting the impression transfer vector, and the image was manipulated so that its facial appearance clearly exhibited the transformed impression. A psychological rating experiment confirmed that the impression transfer vector modulated all three dimensions of higher-order impression, and we discuss the versatility of the method. (A minimal code sketch of this procedure follows this entry.)

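A minimal sketch (not the authors' implementation) of the impression transfer vector idea described above: face vectors are coded with PCA, a direction correlated with an impression rating is estimated by least squares (the paper instead selects discriminative principal components), and a face is shifted along that direction before reconstruction. All data, dimensions, and names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: each row is a flattened face representation
# (shape + texture), with a rating of one impression dimension
# (e.g. Evaluation) per face.
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 4096))      # 200 faces, 4096-dim vectors
ratings = rng.normal(size=200)            # impression ratings per face

# 1) Code the variation among faces in reduced PCA parameters.
pca = PCA(n_components=50)
coeffs = pca.fit_transform(faces)         # 200 x 50 PCA coordinates

# 2) Estimate an impression transfer vector: here, a least-squares
#    direction relating PCA coordinates to the ratings (a simplified
#    stand-in for selecting impression-sensitive components).
w, *_ = np.linalg.lstsq(coeffs, ratings, rcond=None)
transfer = w / np.linalg.norm(w)          # unit impression transfer vector

# 3) Shift a face along the impression dimension and reconstruct it.
def transform_impression(face, strength):
    """Return a face whose target impression is raised (or lowered)."""
    c = pca.transform(face.reshape(1, -1))
    return pca.inverse_transform(c + strength * transfer).ravel()

more_elegant = transform_impression(faces[0], strength=3.0)
less_elegant = transform_impression(faces[0], strength=-3.0)
```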

Face Recognition based on PCA and LDA using Wavelet (웨이블릿을 이용한 PCA와 LDA 기반 얼굴인식)

  • Ahn, Hyo-Chang; Lee, June-Hwan; Rhee, Sang-Burm
    • Proceedings of the IEEK Conference / 2006.06a / pp.731-732 / 2006
  • Limitations of Linear Discriminant Analysis (LDA) for face recognition, such as loss of generalization and computational infeasibility, are addressed and illustrated for the case of a small number of samples. Applying Principal Component Analysis (PCA) before the LDA mapping may be an alternative that overcomes these limitations. We also show that processing time is reduced by the wavelet transform. (A code sketch of this PCA-then-LDA pipeline follows below.)

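A minimal sketch, assuming scikit-learn and PyWavelets, of the wavelet + PCA + LDA pipeline described above. The gallery data, image size, and component counts are hypothetical, and a 1-NN matcher stands in for whatever classifier the paper uses.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def wavelet_reduce(img):
    """Keep the low-frequency (approximation) subband of a 2-D DWT,
    which shrinks the image and lowers the later PCA/LDA cost."""
    approx, _ = pywt.dwt2(img, 'haar')
    return approx.ravel()

# Hypothetical gallery: 40 subjects x 5 images of size 64 x 64.
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = np.repeat(np.arange(40), 5)

features = np.stack([wavelet_reduce(im) for im in images])

# PCA first avoids the small-sample-size problem of LDA (the
# within-class scatter matrix would otherwise be singular);
# LDA then maximizes class separability in the reduced space.
model = make_pipeline(
    PCA(n_components=100),
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(features, labels)
print(model.predict(features[:3]))
```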

A study on the implementation of identification system using facial multi-feature (얼굴의 다중특징을 이용한 인증 시스템 구현)

  • 정택준; 문용선; 박병석
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.448-451 / 2002
  • This study uses multi-feature recognition instead of a single feature to improve recognition accuracy. Each feature is obtained as follows. For the face, the feature is calculated by principal component analysis on a wavelet multiresolution decomposition. For the lips, a filter is first used to extract the lip edges, and another feature is then calculated from the distance ratios of facial parameters. A backpropagation neural network was trained with these inputs, and the advantages and efficiency of the approach are discussed based on the experimental results. (A sketch of such a multi-feature pipeline appears below.)

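A rough sketch of a multi-feature pipeline in the spirit of the entry above: a wavelet/PCA face feature is concatenated with a simple lip-geometry ratio and classified with a backpropagation network (scikit-learn's MLPClassifier). The lip landmarks and the specific ratio are assumptions, not the paper's actual parameters.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def face_feature(img, pca):
    """Wavelet approximation subband followed by PCA projection."""
    approx, _ = pywt.dwt2(img, 'haar')
    return pca.transform(approx.reshape(1, -1)).ravel()

def lip_feature(landmarks):
    """A simple geometric ratio (mouth width / mouth height); a
    stand-in for the edge-based lip parameters in the paper."""
    width = np.linalg.norm(landmarks['right'] - landmarks['left'])
    height = np.linalg.norm(landmarks['bottom'] - landmarks['top'])
    return np.array([width / height])

# Hypothetical data: 20 people x 10 face crops plus lip landmarks.
faces = rng.random((200, 64, 64))
lips = [{'left': rng.random(2), 'right': rng.random(2) + 3.0,
         'top': rng.random(2), 'bottom': rng.random(2) + 1.0}
        for _ in range(200)]
labels = np.repeat(np.arange(20), 10)

# Fit PCA on the wavelet-reduced faces, then fuse the two features.
approx_all = np.stack([pywt.dwt2(f, 'haar')[0].ravel() for f in faces])
pca = PCA(n_components=30).fit(approx_all)
X = np.stack([np.concatenate([face_feature(f, pca), lip_feature(l)])
              for f, l in zip(faces, lips)])

# Backpropagation network over the fused feature vector.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, labels)
print(clf.score(X, labels))
```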

Analysis of CIELuv Color feature for the Segmentation of the Lip Region (입술영역 분할을 위한 CIELuv 칼라 특징 분석)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society / v.22 no.1 / pp.27-34 / 2019
  • In this paper, a new lip feature based on a distance metric in the CIELuv color system is proposed. Its performance was tested on a face image database, the Helen dataset from the University of Illinois. The test process consists of three steps: feature extraction, principal component analysis for the optimal projection of the feature vector, and Otsu's threshold for the resulting two-class problem. On this dataset the proposed feature performed better than conventional features. The evaluation metrics are overlap and segmentation error; the best result for the proposed feature was 65% overlap and 59% segmentation error, whereas conventional methods usually report 80~95% overlap and 5~15% segmentation error. In those conventional cases, however, the face database is well calibrated, with the same background and illumination for every scene, while the Helen images used here were gathered from the internet with no calibration or adjustment. (A sketch of the feature-extraction and thresholding steps appears below.)
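
A sketch, under assumptions, of the three test steps named above, using scikit-image and scikit-learn: a CIELuv color-distance feature per pixel, projection onto the first principal component, and Otsu's threshold for the lip / non-lip decision. The reference lip color and the exact per-pixel feature are hypothetical.

```python
import numpy as np
from skimage.color import rgb2luv
from skimage.filters import threshold_otsu
from sklearn.decomposition import PCA

def lip_mask(rgb_image, lip_reference_luv=(55.0, 95.0, 18.0)):
    """Two-class lip segmentation sketch: a CIELuv distance feature,
    PCA projection of per-pixel features, then Otsu thresholding.
    The reference lip color is a made-up constant, not a value
    taken from the paper."""
    luv = rgb2luv(rgb_image)                      # H x W x 3
    h, w, _ = luv.shape

    # Per-pixel feature: Luv channels plus distance to a lip-like color.
    dist = np.linalg.norm(luv - np.asarray(lip_reference_luv), axis=-1)
    feats = np.concatenate([luv.reshape(-1, 3), dist.reshape(-1, 1)], axis=1)

    # Project onto the first principal component for an optimal 1-D spread.
    score = PCA(n_components=1).fit_transform(feats).ravel()

    # Otsu's threshold for the two-class (lip / non-lip) problem.
    return (score > threshold_otsu(score)).reshape(h, w)

# Usage with a hypothetical image array in place of a Helen image:
img = np.random.default_rng(0).random((120, 160, 3))
print(lip_mask(img).mean())
```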

Exploring the Applicability from Extracurricular Design to Basic Engineering Design in Online : Focusing on the Case of IoT Extra-Curricular in Online (온라인 비교과 설계 교육과정에서 기초 설계 교육과정으로의 적용 가능성 탐색 : 온라인 IoT 비교과 교육과정 사례를 중심으로)

  • Hwang, Yunja; Huh, Ji-suk
    • Journal of Engineering Education Research / v.24 no.4 / pp.30-40 / 2021
  • The purpose of this study is to verify the effectiveness of an online IoT program and to explore the applicability of online design courses that consider the design elements and realistic constraints required for engineering education accreditation. For this study, an IoT program developed for online classes was operated, and its effectiveness as a subject was verified through satisfaction surveys, competency tests, and interviews with participating students. In addition, by presenting the design elements and realistic constraints of an online environment that are required for engineering design courses, the results are expected to serve as basic data for developing and operating actual design curricula.

Real-Time Face Recognition Based on Subspace and LVQ Classifier (부분공간과 LVQ 분류기에 기반한 실시간 얼굴 인식)

  • Kwon, Oh-Ryun; Min, Kyong-Pil; Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.8 no.3 / pp.19-32 / 2007
  • This paper presents a new face recognition method based on an LVQ neural network for constructing a real-time face recognition system. Previous approaches that combined PCA or LDA with a neural network usually require long training times; a supervised LVQ network needs much less training time and can maximize the separability between classes. In the proposed method, the input face image is transformed by PCA and LDA sequentially into a low-dimensional feature vector, and the face is recognized with the LVQ network. To make the system robust to external lighting variation, light compensation is performed on the detected face by max-min normalization as preprocessing, and the PCA and LDA transformations are applied to the normalized face image to produce low-dimensional feature vectors. To determine the initial centers of the LVQ network and speed up its convergence, the K-means clustering algorithm is adopted; the class-representative vectors are then produced by LVQ2 training from these initial centers. Recognition is performed using the Euclidean distance between the class center vectors and the feature vector of the input image. Experiments on still images from the ORL database and on image sequences show that the proposed method achieves a higher recognition ratio than conventional PCA and a hybrid of PCA and LDA. (A simplified sketch of this pipeline follows below.)

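A simplified sketch of the pipeline described above: max-min light normalization, PCA followed by LDA, K-means to seed class prototypes, and a hand-written LVQ1-style update loop (the paper refines prototypes with LVQ2, which scikit-learn does not provide), with nearest-prototype recognition by Euclidean distance. Data shapes and hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical gallery: 10 subjects x 20 face vectors.
X_raw = rng.random((200, 1024))
y = np.repeat(np.arange(10), 20)

# Max-min (light) normalization, then PCA followed by LDA.
X_norm = ((X_raw - X_raw.min(axis=1, keepdims=True))
          / (np.ptp(X_raw, axis=1, keepdims=True) + 1e-8))
pca = PCA(n_components=60).fit(X_norm)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_norm), y)
X = lda.transform(pca.transform(X_norm))

# Initial LVQ prototypes: K-means centers computed within each class.
protos, proto_labels = [], []
for c in np.unique(y):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[y == c])
    protos.append(km.cluster_centers_)
    proto_labels += [c, c]
protos = np.vstack(protos)
proto_labels = np.array(proto_labels)

# Simple LVQ1-style updates (the paper uses LVQ2 refinement).
lr = 0.05
for _ in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(protos - xi, axis=1))
        sign = 1.0 if proto_labels[j] == yi else -1.0
        protos[j] += sign * lr * (xi - protos[j])

def recognize(x):
    """Return the identity of the nearest prototype (Euclidean distance)."""
    return proto_labels[np.argmin(np.linalg.norm(protos - x, axis=1))]

print(recognize(X[0]), y[0])
```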

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi, Jung-Ju; Kim, Dong-Sun; Lee, In-Kwon
    • Journal of KIISE: Computer Systems and Theory / v.32 no.1 / pp.39-48 / 2005
  • In traditional 2D animation, anticipation makes an animation more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis, directly from the given key-framed and/or motion-captured facial animation data; the vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression, and the effect that preserves the topology of the face model is selected as the best one. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for motion-captured and key-framed facial animations. This work is part of the broader subject of applying the principles of traditional 2D animation to 3D animation: we show how to incorporate anticipation into 3D facial animation, so that animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation. (A sketch of the PCA-based vertex grouping appears below.)
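
One plausible reading of the PCA-based vertex grouping described above, sketched with hypothetical data: each vertex's dominant motion direction is taken as the first principal component of its frame-to-frame displacements, and vertices with similar directions are clustered into components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical animation: T frames x V vertices x 3 coordinates.
rng = np.random.default_rng(0)
T, V = 120, 500
anim = rng.normal(size=(T, V, 3)).cumsum(axis=0)      # smooth-ish trajectories

# Dominant motion direction of each vertex: first principal component
# of its frame-to-frame displacements.
directions = np.empty((V, 3))
for v in range(V):
    disp = np.diff(anim[:, v, :], axis=0)              # (T-1) x 3 displacements
    directions[v] = PCA(n_components=1).fit(disp).components_[0]

# Sign-normalize so collinear but opposite-pointing directions agree,
# then group vertices with similar motion directions into components.
directions *= np.sign(directions[:, :1] + 1e-9)
components = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(directions)

print(np.bincount(components))   # vertex count per facial component
```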

Face recognition using PCA and face direction information (PCA와 얼굴방향 정보를 이용한 얼굴인식)

  • Kim, Seung-Jae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.6 / pp.609-616 / 2017
  • In this paper, we propose an algorithm that uses the left and right rotation information of the input image to obtain a more stable and higher recognition rate in face recognition. The proposed algorithm takes a facial image from a web camera as input, reduces the image size, and normalizes brightness and color to improve the recognition rate. Principal Component Analysis (PCA) is applied to the detected candidate regions to obtain feature vectors and classify faces. In addition, to narrow the error range of the recognition rate, a data set with left and right 45° rotation information is constructed in consideration of the direction of the input face image, and feature vectors are obtained with PCA for each view. The feature vectors are projected into the eigenspace, and the final face is recognized by comparing Euclidean distances to each stored feature. PCA-based feature vectors are low-dimensional yet sufficient to represent the face, and the small amount of computation allows fast recognition. The proposed method can improve the stability and accuracy of recognition, runs faster than other algorithms, and can be used in a real-time recognition system. (A minimal sketch of the eigenspace matching with rotated views follows below.)
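
A minimal sketch of the eigenspace matching with rotated views described above: a gallery holding frontal and left/right 45° views per person is projected with PCA, and a probe is assigned the identity of the nearest gallery feature by Euclidean distance. The data, image size, and number of components are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical gallery: for each of 15 people, a frontal face plus faces
# rotated 45 degrees to the left and right, as 32x32 grayscale vectors.
n_people, n_views = 15, 3
gallery = rng.random((n_people * n_views, 32 * 32))
gallery_ids = np.repeat(np.arange(n_people), n_views)

# Brightness normalization, then PCA for low-dimensional feature vectors.
gallery = gallery - gallery.mean(axis=1, keepdims=True)
pca = PCA(n_components=40).fit(gallery)
gallery_feats = pca.transform(gallery)

def recognize(face):
    """Project the probe into the eigenspace and return the identity of
    the closest gallery feature (any of the three view directions)."""
    feat = pca.transform((face - face.mean()).reshape(1, -1))
    return gallery_ids[np.argmin(np.linalg.norm(gallery_feats - feat, axis=1))]

probe = gallery[4] + 0.01 * rng.normal(size=32 * 32)
print(recognize(probe), gallery_ids[4])
```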

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
    • Journal of Electrical Engineering and Technology / v.12 no.4 / pp.1657-1662 / 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. The overall procedure of the proposed framework includes accurate face detection to remove background and noise effects from the raw image sequences, alignment of each image using vertex mask generation, and extraction of 1D transform features, which are then reduced by principal component analysis. Finally, the reduced features are trained and tested with a Hidden Markov Model (HMM). Experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively, demonstrating the superiority of the proposed approach over state-of-the-art methods. (A minimal per-class HMM classification sketch follows below.)
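
A minimal sketch of the per-class HMM classification stage, assuming the hmmlearn package (the entry above does not name its implementation): one Gaussian HMM is trained per expression on that class's sequences of reduced frame features, and a test sequence is assigned to the expression whose model gives the highest log-likelihood. Feature dimension, sequence lengths, and data are hypothetical.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

rng = np.random.default_rng(0)
expressions = ['happy', 'surprise', 'anger']

# Hypothetical training data: for each expression, 10 sequences of 20 frames,
# each frame already reduced to a 12-dim feature vector (standing in for the
# PCA-reduced 1D transform features of the paper).
def fake_sequences(class_mean):
    return [rng.normal(size=(20, 12)) + class_mean for _ in range(10)]

train = {e: fake_sequences(rng.normal(scale=3.0, size=12)) for e in expressions}

# One HMM per expression class, trained on that class's sequences.
models = {}
for expr, seqs in train.items():
    X = np.vstack(seqs)                      # concatenated frames
    lengths = [len(s) for s in seqs]         # frame count per sequence
    m = GaussianHMM(n_components=4, covariance_type='diag',
                    n_iter=25, random_state=0)
    m.fit(X, lengths)
    models[expr] = m

def classify(sequence):
    """Pick the expression whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda e: models[e].score(sequence))

print(classify(train['surprise'][0]))
```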