• Title/Abstract/Keyword: facial feature extraction


Full face recognition using the features extracted by shape analysis and the back-propagation algorithm (형태분석에 의한 특징 추출과 BP알고리즘을 이용한 정면 얼굴 인식)

  • 최동선;이주신
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.10 / pp.63-71 / 1996
  • This paper proposes a method which analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. With the feature parameters of facial elements extracted by the method, full human faces are recognized by a neural network trained with the BP algorithm. The input image is converted to binary and then labelled. The area, circumference, and circular degree of the labelled binary image are obtained using a chain code and defined as the feature parameters of the face image. We first extract the two eyes from the similarity and distance of the feature parameters of each facial element, and then correct the input face image by standardizing on the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. Distances and angles between the feature points are used as parameters to recognize the full face. To show the validity of the proposed method, we confirmed that the algorithm achieves a 100% recognition rate on both learned and non-learned data for 20 persons.
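
Below is a minimal sketch, not the paper's code, of the shape features it describes: binarize the image, find the labelled regions, and compute area, circumference, and circular degree for each. It uses OpenCV contours in place of an explicit chain-code routine; all names and the threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def region_shape_features(gray, thresh=128):
    """Return (area, circumference, circularity) per labelled region."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Contours play the role of the chain code in the paper.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)  # closed contour length
        if perimeter == 0:
            continue
        # Circular degree: 4*pi*A / P^2, equal to 1.0 for a perfect circle.
        circularity = 4.0 * np.pi * area / (perimeter ** 2)
        features.append((area, perimeter, circularity))
    return features
```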


Features Detection in Face Based on the Model (모델 기반 얼굴에서 특징점 추출)

  • 석경휴;김용수;김동국;배철수;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.134-138 / 2002
  • Unlike other general objects, human faces do not have distinct features. In general, the features recognized first when humans see a face are the eyes, nose, and mouth, and these features vary from face to face. In this paper, we propose a face recognition algorithm using the hidden Markov model (HMM). In the preprocessing stage, we find the edges of a face using a locally adaptive threshold scheme, extract features based on generic knowledge of a face, and construct a database with the extracted features. In the training stage, we generate HMM parameters for each person by using the forward-backward algorithm. In the recognition stage, we apply the probability values calculated by the HMM to the input data. The input face is then recognized by the Euclidean distance of the face feature vector and the cross-correlation between the input image and the database image. Computer simulation shows that the proposed HMM algorithm gives a higher recognition rate than conventional face recognition algorithms.
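
A minimal sketch of the recognition-stage scoring described above, assuming a discrete HMM whose parameters were already estimated with the forward-backward algorithm; the function and variable names are illustrative. The input face would then be assigned to the enrolled person whose model scores highest, combined with the Euclidean-distance and cross-correlation checks the abstract mentions.

```python
import numpy as np

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of an observation sequence under a discrete HMM.

    obs:    sequence of observation symbol indices
    log_pi: (N,) initial state log-probabilities
    log_A:  (N, N) state transition log-probabilities
    log_B:  (N, M) emission log-probabilities
    """
    alpha = log_pi + log_B[:, obs[0]]
    for t in range(1, len(obs)):
        # Log-sum-exp over previous states for each current state.
        alpha = log_B[:, obs[t]] + np.logaddexp.reduce(
            alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Recognition: pick the person whose HMM explains the input best, e.g.
# best = max(models, key=lambda p: forward_log_likelihood(obs, *models[p]))
```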


Audio and Image based Emotion Recognition Framework on Real-time Video Streaming (실시간 동영상 스트리밍 환경에서 오디오 및 영상기반 감정인식 프레임워크)

  • Bang, Jaehun;Lim, Ho Jun;Lee, Sungyoung
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.1108-1111 / 2017
  • With the recent emergence of diverse IoT sensor devices, emotion recognition research has been shifting from single-source techniques to multimodal sensor-based techniques, and research using audio and video in particular is being actively pursued. Existing audio- and video-based emotion recognition studies use open databases in which the two sensor streams were recorded and stored simultaneously, extract features from each stream without any event handling, and recognize emotion with a single classifier. Such techniques cope poorly with intervals in which the person is not speaking or the face is not visible, and decision-level fusion research that combines the two sources into a single emotion is lacking. To solve these problems, this paper proposes a real-time audio- and video-based emotion recognition framework that extracts the event information contained in the audio and video, recognizes emotions through separate audio-based and video-based recognition modules, and performs decision fusion by integrating the resulting emotions in time units.
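
A minimal sketch, assumed rather than taken from the paper, of time-unit decision fusion as described above: per-modality emotion decisions are grouped into fixed time windows and fused by majority vote, so windows with no speech or no visible face simply contribute nothing from that modality.

```python
from collections import Counter

def fuse_decisions(audio_preds, video_preds, window_sec=1.0):
    """audio_preds, video_preds: lists of (timestamp_sec, emotion_label).

    Returns {window_index: fused_emotion}, fusing by majority vote over
    whatever decisions the two recognition modules emitted in each window.
    """
    windows = {}
    for ts, label in list(audio_preds) + list(video_preds):
        windows.setdefault(int(ts // window_sec), []).append(label)
    return {w: Counter(labels).most_common(1)[0][0]
            for w, labels in sorted(windows.items())}
```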

Eye Detection Method Using Geometrical Features Between Eyebrows and Eyes in Smart Phone (스마트 폰에서 눈썹과 눈 간의 기하학적 특성을 이용한 눈 검출 방법)

  • Oh, Woongchun;Kang, Teaho;Kwak, Noyoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.11a / pp.41-44 / 2014
  • This paper presents an eye detection method for the Android smartphone environment that first detects the eyebrows using the block contrast between the centre block and its surrounding blocks, and then locates the eyes using geometric properties between eyebrows and eyes. The proposed method detects the face region in the input image using Haar-like features, the AdaBoost algorithm, and adaptive template matching, and from it estimates the left and right eyebrow and eye search regions. Eyebrows are extracted from the integral image of each eyebrow region using the property that the eyebrow is relatively darker than the surrounding blocks. At the same time, eye candidate regions are extracted from the integral image of each eye search region using the property that the pupil block is relatively darker than the remaining surrounding blocks and shows good symmetry, and the centre pixel of the block with the maximum block contrast is taken as a pupil candidate point. The pupil candidate points are then verified using the geometric property that the eyes always lie below the eyebrows, with a separation that does not vary greatly from person to person. The proposed method is robust to changes in distance and illumination and to the wearing of glasses. Because the eyebrows are found first and the suitability of the left-right pupil candidate pair is verified geometrically, glasses and eyes can be distinguished effectively, and the position of a closed eye can be detected even when the pupil is hidden.
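
A minimal sketch of the block-contrast test described above, assuming an integral image built with NumPy: the centre block's sum is compared against its eight neighbouring blocks, and a centre darker than all neighbours is a candidate eyebrow or pupil block. Names and the contrast criterion are illustrative, and the 3x3 block neighbourhood is assumed to lie inside the image.

```python
import numpy as np

def block_sum(ii, y, x, h, w):
    """Sum of gray[y:y+h, x:x+w] from an integral image ii that is padded
    with a leading row and column of zeros."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def is_dark_centre(gray, y, x, h, w):
    """True if the h x w block at (y, x) is darker than all 8 neighbours."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    centre = block_sum(ii, y, x, h, w)
    neighbours = [block_sum(ii, y + dy * h, x + dx * w, h, w)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
    return all(centre < n for n in neighbours)
```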


Extraction of Important Areas Using Feature Feedback Based on PCA (PCA 기반 특징 되먹임을 이용한 중요 영역 추출)

  • Lee, Seung-Hyeon;Kim, Do-Yun;Choi, Sang-Il;Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.461-469 / 2020
  • In this paper, we propose a PCA-based feature feedback method for extracting the important areas of handwritten-digit and face data sets, extending the previous LDA-based feature feedback method. In the proposed method, the data is reduced to the important feature dimensions by applying PCA, a dimension-reduction machine learning algorithm. Through the weights derived during the dimension-reduction process, the important points of the data along each reduced dimension axis are identified. Each dimension axis carries a different weight in the total data according to the size of its eigenvalue. Accordingly, a weight proportional to the eigenvalue of each axis is assigned, and the important points of the data along each axis are accumulated. The critical area of the data is then obtained by applying a threshold to the accumulated result. The important area is reverse-mapped to the original data, and the important region is selected in the original data space. Experiments on the MNIST dataset verify the effectiveness of pattern recognition based on PCA-based feature feedback by comparing the results with the existing LDA-based feature feedback method.
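
A minimal sketch of the PCA-based feature-feedback idea described above, assuming scikit-learn: each principal axis contributes a pixel-importance map weighted by its eigenvalue, the maps are accumulated, and a threshold selects the important region in the original space. The parameters are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_importance_mask(X, n_components=50, quantile=0.9):
    """X: (n_samples, n_pixels) flattened images, e.g. MNIST as (n, 784).

    Returns a boolean mask over the original pixel space marking the
    important area.
    """
    pca = PCA(n_components=n_components).fit(X)
    eigenvalues = pca.explained_variance_  # one weight per reduced axis
    # Accumulate each axis's pixel loadings, weighted by its eigenvalue.
    importance = (eigenvalues[:, None] * np.abs(pca.components_)).sum(axis=0)
    # Thresholding yields the critical area; the reverse mapping is direct
    # here because PCA loadings already live in the original pixel space.
    return importance >= np.quantile(importance, quantile)
```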

Fixed-Point Modeling and Performance Analysis of a SIFT Keypoints Localization Algorithm for SoC Hardware Design (SoC 하드웨어 설계를 위한 SIFT 특징점 위치 결정 알고리즘의 고정 소수점 모델링 및 성능 분석)

  • Park, Chan-Ill;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.6 / pp.49-59 / 2008
  • SIFT (Scale Invariant Feature Transform) is an algorithm that extracts vectors at pixels around keypoints, where the pixel values differ strongly from their neighbours, such as at vertices and edges of an object. The SIFT algorithm is being actively researched for various image processing applications including 3-D image construction, and its most computation-intensive stage is keypoint localization. In this paper, we develop a fixed-point model of the keypoint localization and propose an efficient hardware architecture for embedded applications. The bit-lengths of key variables are determined based on two performance measures: localization accuracy and error rate. Compared with the original algorithm (implemented in Matlab), the accuracy and error rate of the proposed fixed-point model are 93.57% and 2.72%, respectively. In addition, we found that most missing keypoints appear at the edges of an object, which are not very important for keypoint matching. We estimate that the hardware implementation will give a processing speed of 10~15 frames/sec, whereas the fixed-point implementation takes 10 seconds per frame on a Pentium Core2Duo (2.13 GHz) and one hour per frame on an ARM9 (400 MHz).
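
A minimal sketch of the fixed-point modelling step such a study involves: quantizing a variable to a signed fixed-point format and sweeping the bit-lengths while comparing against the floating-point reference. The integer/fraction split below is an assumed example, not the bit-widths chosen in the paper.

```python
import numpy as np

def to_fixed_point(x, int_bits=8, frac_bits=8):
    """Quantize x to a signed fixed-point format with the given bit split."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (int_bits + frac_bits - 1))        # most negative code
    hi = 2 ** (int_bits + frac_bits - 1) - 1       # most positive code
    codes = np.clip(np.round(np.asarray(x) * scale), lo, hi)
    return codes / scale  # real-valued, but limited to fixed-point precision

# Sweeping frac_bits and measuring localization accuracy and error rate
# against the floating-point model is how such bit-lengths would be chosen.
```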

A Study on The Expression of Digital Eye Contents for Emotional Communication (감성 커뮤니케이션을 위한 디지털 눈 콘텐츠 표현 연구)

  • Lim, Yoon-Ah;Lee, Eun-Ah;Kwon, Jieun
    • Journal of Digital Convergence / v.15 no.12 / pp.563-571 / 2017
  • The purpose of this paper is to establish emotional expression factors for digital eye contents that can be applied to digital environments. Emotions applicable to a smart doll are derived, and we suggest guidelines for the expressive factors of each emotion. First, we research the concepts and characteristics of emotional expression shown in the eyes in publications, animation, and actual video. Second, we identify six emotions (Happy, Angry, Sad, Relaxed, Sexy, Pure) and extract their emotional expression factors. Third, we analyze the extracted factors to establish guidelines for the emotional expression of digital eyes. As a result, this study finds that the factors that distinguish and represent each emotion fall into four categories: eye shape, gaze, iris size, and effect. These can be used to enhance emotional communication effects in digital contents such as animations, robots, and smart toys.

A Method of Detection of Deepfake Using Bidirectional Convolutional LSTM (Bidirectional Convolutional LSTM을 이용한 Deepfake 탐지 방법)

  • Lee, Dae-hyeon;Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1053-1065 / 2020
  • With the recent development of hardware performance and artificial intelligence technology, sophisticated fake videos that are difficult to distinguish with the human eye are increasing. Face synthesis technology using artificial intelligence is called Deepfake, and anyone with a little programming skill and deep learning knowledge can use it to produce sophisticated fake videos. The number of indiscriminate fake videos has increased significantly, which may lead to problems such as privacy violations, fake news, and fraud. It is therefore necessary to detect fake video clips that cannot be discriminated by the human eye. In this paper, we propose a deepfake detection model that applies Bidirectional Convolutional LSTM and an Attention Module. Unlike an LSTM, which considers only the forward sequence, the proposed model also processes the sequence in reverse order. The Attention Module is used with a convolutional neural network to exploit the characteristics of each frame during feature extraction. Experiments show that the proposed model achieves 93.5% accuracy and an AUC up to 50% higher than that of pre-existing studies.
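
A minimal sketch, assuming TensorFlow/Keras, of a bidirectional convolutional LSTM video classifier in the spirit of the model above: one ConvLSTM2D pass runs forward over the frames and one runs in reverse, and their outputs are concatenated. The paper's Attention Module is omitted here, and all shapes and hyperparameters are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector(frames=16, height=64, width=64, channels=3):
    inp = layers.Input(shape=(frames, height, width, channels))
    # Forward and reverse passes over the frame sequence.
    fwd = layers.ConvLSTM2D(32, 3, padding="same")(inp)
    bwd = layers.ConvLSTM2D(32, 3, padding="same", go_backwards=True)(inp)
    x = layers.Concatenate()([fwd, bwd])
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # 0 = real, 1 = deepfake
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])
    return model
```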

A Study on Biometric Model for Information Security (정보보안을 위한 생체 인식 모델에 관한 연구)

  • Jun-Yeong Kim;Se-Hoon Jung;Chun-Bo Sim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.1 / pp.317-326 / 2024
  • Biometric recognition is a technology that identifies a person by extracting information on their biometric and behavioral characteristics with a specific device. Cyber threats such as forgery, duplication, and hacking of biometric characteristics are increasing in the field of biometrics. In response, security systems are being strengthened and made more complex, which makes them difficult for individuals to use. To this end, multimodal biometric models are being studied. Existing studies have suggested feature fusion methods, but comparisons between fusion methods are insufficient. Therefore, in this paper, we compare and evaluate fusion methods for multimodal biometric models using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy and high stability with the 'Feature-Level' fusion method. However, because the EfficientNet-B7 model is large, model light-weighting studies are needed for biometric feature fusion.
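
A minimal sketch of 'Feature-Level' fusion as compared above: the embedding from each modality's CNN backbone is L2-normalized and concatenated into one vector for a downstream classifier. How the paper normalizes or weights the modalities is not stated, so the normalization step here is an assumption.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def feature_level_fusion(fingerprint_feat, face_feat, iris_feat):
    """Each argument is a 1-D feature vector from that modality's backbone
    (e.g. EfficientNet-B7 embeddings); returns the fused feature vector."""
    return np.concatenate([l2_normalize(fingerprint_feat),
                           l2_normalize(face_feat),
                           l2_normalize(iris_feat)])
```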

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human Robot Interaction) and HCI (Human Computer Interaction). By using facial expressions, a system can produce reactions that correspond to the emotional state of the user, and service agents such as intelligent robots can infer suitable services to provide. In this article, we address the issue of expressive face modeling using an advanced Active Appearance Model (AAM) for facial emotion recognition, considering the six universal emotion categories defined by Ekman. In the human face, emotions are most widely represented by the eyes and mouth. To recognize emotion from a facial image, we need to extract feature points such as Ekman's Action Units (AU). The AAM is one of the commonly used methods for facial feature extraction and can be applied to construct AUs. Since the traditional AAM depends on the setting of the initial model parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations we obtain the model matched to the facial feature outline and use it to recognize the facial emotion with the Bayesian Network.
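
A minimal sketch of the final inference step described above, with a naive-Bayes-style network standing in for the paper's Bayesian Network: binary Action Unit detections (as would be derived from AAM feature points) are scored against Ekman's six categories. The AU list and conditional probability values are placeholders, not the paper's learned parameters.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "fear", "surprise", "disgust"]
AUS = ["AU1", "AU4", "AU6", "AU12", "AU15"]

prior = np.full(len(EMOTIONS), 1.0 / len(EMOTIONS))
# p_au[i, j] = P(AU_j present | emotion_i); placeholder values only.
p_au = np.random.default_rng(0).uniform(0.1, 0.9, (len(EMOTIONS), len(AUS)))

def infer_emotion(au_present):
    """au_present: dict mapping AU name -> bool detection.

    Returns the posterior distribution over the six emotion categories.
    """
    log_post = np.log(prior)
    for j, au in enumerate(AUS):
        p = p_au[:, j] if au_present.get(au, False) else 1.0 - p_au[:, j]
        log_post += np.log(p)
    post = np.exp(log_post - log_post.max())  # stabilized normalization
    return dict(zip(EMOTIONS, post / post.sum()))
```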