• Title/Abstract/Keywords: facial recognition technology

Search results: 172 (processing time: 0.022 s)

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2003 / pp.783-788 / 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform is mobile, the captured facial images are small and vary in pose, so the algorithm must cope with these constraints and still detect and recognize faces in near real time. In the detection step, a 'coarse to fine' strategy is used. First, a region boundary containing the face is roughly located using dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To this end, simplified facial feature maps based on characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate features are then verified by checking whether the length and orientation of feature pairs are consistent with face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined, which includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented. Through these procedures, facial information for the detected face is obtained, and the face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure over matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], enable real-time face detection and recognition, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness for mobile robot applications is demonstrated.
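The feature-pair verification step described in this abstract (checking that candidate eye/mouth locations fit plausible face geometry) can be sketched as follows. The ratio range and tilt threshold are illustrative assumptions of this sketch, not values from the paper.

```python
import math

def plausible_face_geometry(left_eye, right_eye, mouth,
                            ratio_range=(0.8, 1.6), max_tilt_deg=30.0):
    """Return True if an (eyes, mouth) candidate triple satisfies a rough
    facial-geometry constraint. Points are (x, y) in image coordinates
    (y grows downward); thresholds are illustrative, not the paper's."""
    ex, ey = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    inter_ocular = math.hypot(ex, ey)
    if inter_ocular == 0:
        return False
    # The eye line should be close to horizontal for a near-frontal face.
    tilt = abs(math.degrees(math.atan2(ey, ex)))
    if min(tilt, 180 - tilt) > max_tilt_deg:
        return False
    # The mouth should sit below the eye midpoint at a plausible distance.
    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    eye_mouth = math.hypot(mouth[0] - mid[0], mouth[1] - mid[1])
    r = eye_mouth / inter_ocular
    return ratio_range[0] <= r <= ratio_range[1] and mouth[1] > mid[1]
```

A triple passing this filter would then proceed to the appearance-based recognition step.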


Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems / Vol. 19, No. 3 / pp.323-333 / 2023
  • Facial expression recognition can aid the development of fatigue-driving detection, teaching-quality evaluation, and other fields. In this study, a facial expression recognition method is proposed with a residual masking reconstruction network as its backbone to achieve more efficient expression recognition and classification. The residual layer acquires and captures the information features of the input image, and the masking layer assigns weight coefficients to the different information features, enabling accurate and effective analysis of images of different sizes. To further improve expression analysis, the loss function of the model is optimized along two dimensions, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. Simulation results show that the ROC of the proposed method remained above 0.9995, so different expressions can be distinguished accurately, and the precision was 75.98%, indicating strong performance of the facial expression recognition model.
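The residual-plus-masking idea in this abstract (a residual branch whose contribution is re-weighted per feature by a learned mask) can be sketched in numpy. The dense weights and sigmoid mask here are hypothetical simplifications; the actual network uses convolutional layers and trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def masked_residual_block(x, w_res, w_mask):
    """Toy masked residual block: the residual branch is gated by a
    per-feature mask in (0, 1) before being added back to the input.
    w_res and w_mask are stand-in weight matrices, not trained values."""
    residual = np.maximum(x @ w_res, 0.0)   # residual branch (ReLU)
    mask = sigmoid(x @ w_mask)              # per-feature weight coefficients
    return x + mask * residual              # masked residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))             # 4 samples, 8 features
out = masked_residual_block(x, rng.standard_normal((8, 8)),
                            rng.standard_normal((8, 8)))
```

Because the mask lies in (0, 1), features the mask deems uninformative contribute little residual signal, which is the weighting behavior the abstract attributes to its masking layer.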

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image

  • 김성훈;한기태
    • KIPS Transactions on Software and Data Engineering / Vol. 5, No. 5 / pp.251-260 / 2016
  • Face recognition extracts features from face images, learns them with various algorithms, and recognizes a person by comparing the learned data with the features of a new face image; various methods are required to improve its recognition rate. In the training phase, feature components must be extracted from the face images, and a conventional method for this is Linear Discriminant Analysis (LDA). LDA represents face images as points in a high-dimensional space and extracts discriminative features by analyzing class information and the distribution of the points. Because a point's position is determined by the pixel values of the face image, incorrect facial features can be extracted when the image contains unnecessary regions or regions that change frequently; in particular, when face recognition is performed on ordinary camera images, face size varies with the distance between the face and the camera, which ultimately lowers the recognition rate. To address these problems, this paper detects the face region in an ordinary camera image, removes unnecessary regions using the facial contour computed with a Gabor filter, and then normalizes the face region to a fixed size. Facial features were extracted from the normalized face images via LDA and trained with an artificial neural network; the resulting recognition rate improved by about 13% over the conventional method that retains the unnecessary regions.
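The LDA feature-extraction step this abstract builds on can be sketched for the two-class case: the Fisher direction maximizes between-class scatter relative to within-class scatter. The synthetic data and the small regularization constant are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def fisher_lda_direction(X1, X2):
    """Two-class Fisher LDA: the projection direction w that maximizes
    between-class separation over within-class scatter. A minimal numpy
    sketch of LDA feature extraction; real face LDA works on many
    classes in a much higher-dimensional pixel space."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))
    # Small ridge term keeps the solve stable (an assumption of this sketch).
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m2 - m1)
    return w / np.linalg.norm(w)

# Two synthetic classes separated along every axis.
rng = np.random.default_rng(1)
X1 = rng.standard_normal((50, 5))
X2 = rng.standard_normal((50, 5)) + 3.0
w = fisher_lda_direction(X1, X2)
```

Projections onto `w` separate the two classes; in the paper's pipeline such LDA features then feed an artificial neural network classifier.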

Fear and Surprise Facial Recognition Algorithm for Dangerous Situation Recognition

  • Kwak, NaeJoung;Ryu, SungPil;Hwang, IlYoung
    • International Journal of Internet, Broadcasting and Communication / Vol. 7, No. 2 / pp.51-55 / 2015
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. The proposed method recognizes surprise and fear, among the various human emotional expressions, as indicators of a dangerous situation. It first extracts the facial region from the input using the Haar-like technique and detects the eye and lip regions within the extracted face. It then applies Uniform LBP to each region, detects the facial expression, and recognizes the dangerous situation. The method was evaluated on MUCT database images and webcam input; it produces good expression-recognition results, discriminates dangerous situations well, and achieves an average recognition rate of 91.05%.
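The Uniform LBP descriptor applied to the eye and lip regions can be sketched as follows: each pixel is coded by thresholding its 8 neighbours against it, and codes with at most two circular bit transitions ("uniform" patterns) get their own histogram bins. This is an unoptimised illustration, not the paper's implementation.

```python
import numpy as np

def uniform_lbp_histogram(patch):
    """8-neighbour LBP histogram with the standard 'uniform' grouping:
    the 58 codes with <= 2 circular 0/1 transitions each get a bin, and
    all remaining codes share one final bin (index 58)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    # Precompute the 58 uniform codes: rotate left by one bit, XOR,
    # and count transitions.
    uniform_codes = [c for c in range(256)
                     if bin((((c << 1) | (c >> 7)) & 0xFF) ^ c).count("1") <= 2]
    index = {c: i for i, c in enumerate(uniform_codes)}
    hist = np.zeros(59, dtype=int)
    h, w = patch.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = patch[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                code |= int(patch[y + dy, x + dx] >= center) << bit
            hist[index.get(code, 58)] += 1
    return hist
```

Concatenating such histograms from the eye and lip regions yields the feature vector on which expression classification can operate.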

Trends and Future Directions in Facial Expression Recognition Technology: A Text Mining Analysis Approach

  • 전인수;이병천;임수빈;문지훈
    • Korea Information Processing Society: Conference Proceedings / KIPS 2023 Spring Conference / pp.748-750 / 2023
  • Facial expression recognition technology's rapid growth and development have garnered significant attention in recent years. This technology holds immense potential for various applications, making it crucial to stay up-to-date with the latest trends and advancements. Simultaneously, it is essential to identify and address the challenges that impede the technology's progress. Motivated by these factors, this study aims to understand the latest trends, future directions, and challenges in facial expression recognition technology by utilizing text mining to analyze papers published between 2020 and 2023. Our research focuses on discerning which aspects of these papers provide valuable insights into the field's recent developments and issues. By doing so, we aim to present the information in an accessible and engaging manner for readers, enabling them to understand the current state and future potential of facial expression recognition technology. Ultimately, our study seeks to contribute to the ongoing dialogue and facilitate further advancements in this rapidly evolving field.

Monosyllable Speech Recognition through Facial Movement Analysis

  • 강동원;서정우;최진승;최재봉;탁계래
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 63, No. 6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate facial movement parameters using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from conventional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data were converted into 11 parameters and represented as patterns for each monosyllable vocalization. These parameter patterns were learned and recognized for each monosyllable using speech recognition algorithms based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy over the 11 monosyllables was 97.2%, which suggests the feasibility of Korean speech recognition through quantitative facial movement analysis.
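The HMM/Viterbi recognition step named in this abstract can be sketched with standard log-domain Viterbi decoding. The initial, transition, and emission matrices below are toy values for illustration, not the paper's trained per-monosyllable models.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete observation sequence,
    computed in the log domain. pi: initial probs (N,), A: transition
    probs (N, N), B: emission probs (N, num_symbols)."""
    T, N = len(obs), len(pi)
    logpi, logA, logB = np.log(pi), np.log(A), np.log(B)
    delta = logpi + logB[:, obs[0]]        # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int)     # best predecessor per state and time
    for t in range(1, T):
        scores = delta[:, None] + logA     # scores[i, j]: extend path i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):          # trace the backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.99, 0.01])
A = np.array([[0.1, 0.9], [0.9, 0.1]])     # states strongly alternate
B = np.array([[0.9, 0.1], [0.1, 0.9]])     # state i mostly emits symbol i
```

For recognition one would train one HMM per monosyllable and pick the model with the highest decoded likelihood for an observed parameter sequence.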

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 3 / pp.924-942 / 2020
  • Facial expressions are diverse and vary across persons because of psychological factors, whereas facial actions are comparatively stable owing to the fixed anatomical structure of the face. Improving action unit (AU) recognition therefore facilitates facial expression recognition and provides a sound basis for mental-state analysis. It remains a challenging task with limited recognition accuracy, however, because the muscle movements of the face are tiny and the corresponding facial actions are accordingly subtle. Taking into account that muscle movements influence one another when a person expresses emotion, we propose to make full use of the co-occurrence relationships among AUs. To also capture the dynamic characteristics of AUs, we adopt a 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple AUs around the brows, nose, and mouth, which contribute most to emotional expression, with their co-occurrence relationships imposed as constraints. Experiments were conducted on the public CASME dataset and its variant CASME2. The results show that the proposed AU co-occurrence constraint 3DCNN-based approach outperforms current approaches, demonstrating the effectiveness of exploiting AU relationships in AU recognition.
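One simple way to express an AU co-occurrence constraint is as a penalty term over predicted AU probabilities and a pairwise co-occurrence matrix. The function and matrix below are a hypothetical stand-in to illustrate the idea, not the loss used in the paper.

```python
import numpy as np

def cooccurrence_penalty(probs, C):
    """Toy co-occurrence term: rewards AU probability vectors that agree
    with a pairwise matrix C (C[i, j] > 0 when AUs i and j tend to fire
    together, < 0 when they rarely do). Lower is better, so it can be
    added to a base classification loss as a soft constraint."""
    return -float(probs @ C @ probs)

# Hypothetical matrix: AUs 0 and 1 frequently co-occur.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
```

With this matrix, jointly activating both AUs yields a lower penalty than activating only one, nudging the network's predictions toward the encoded co-occurrence statistics.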

A Study on the System of Facial Expression Recognition for Emotional Information and Communication Technology Teaching

  • 송은지
    • Journal of Practical Engineering Education / Vol. 4, No. 2 / pp.171-175 / 2012
  • Research on ICT (Information and Communication Technology) that can recognize and communicate human emotions through information technology has been increasing. For example, whereas reading another person's mind once required forming a relationship and sharing activities with that person, the digitization of society is digitizing that experience, and digital devices capable of mind reading are emerging; in other words, devices can now perform the emotional inference that only humans could previously do. Among the various emotion-recognition modalities currently under study, facial emotion recognition is expected to provide an efficient and natural human interface. This paper reviews emotional ICT and, as a case study, examines the mechanism of a facial emotion recognition system.


A Study on Face Recognition Applying the spFACS ASM Algorithm to Image Objects

  • 최병관
    • Journal of the Korea Society of Digital Industry and Information Management / Vol. 12, No. 4 / pp.1-12 / 2016
  • Digital imaging technology has developed beyond the limits of the multimedia industry into a state-of-the-art IT convergence and composite industry; in the field of smart object recognition in particular, various techniques have been actively studied in applications such as face recognition on phones. Recently, face recognition has evolved, by way of object recognition technology, into intelligent video detection and recognition, and image object recognition is being applied to IP cameras, where research combining object detection with face recognition remains active. This paper first reviews the relevant technology trends and then proposes a smile-detection scheme based on spFACS (Smile Progress Facial Action Coding System) applied to human image objects: (1) an ASM algorithm is applied, suggesting a way to effectively evaluate the results through image objects; and (2) by applying face recognition to the detected object and detecting the tooth area, the scheme demonstrates the effect of extracting feature points according to the recognized facial expression.

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 4 / pp.2124-2148 / 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach to FER that is robust to noise. The main contributions of this work are as follows. First, to preserve texture details in facial expression images while removing image noise, we improved the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors: the gray-value difference between the object and the background, and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor that combines the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of the eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method, and its recognition rate is not significantly affected by Gaussian noise or salt-and-pepper noise.
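The baseline this abstract improves on is classic Perona-Malik anisotropic diffusion, one explicit step of which can be sketched in numpy. The paper additionally modulates the coefficient with object/background contrast; this sketch uses only the standard gradient-based edge-stopping term, and the `kappa`/`dt` values are illustrative.

```python
import numpy as np

def perona_malik_step(img, kappa=30.0, dt=0.2):
    """One explicit step of Perona-Malik anisotropic diffusion: smooth
    within regions while suppressing diffusion across strong edges.
    np.roll gives periodic borders for brevity."""
    # Finite differences to the four neighbours.
    n = np.roll(img, -1, 0) - img
    s = np.roll(img, 1, 0) - img
    e = np.roll(img, -1, 1) - img
    w = np.roll(img, 1, 1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusion coefficient
    return img + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```

Iterating this step denoises flat facial regions while large gradients (where `g` is small) such as muscle-deformation edges diffuse far less, which is the behavior the improved filter builds upon.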