• Title/Summary/Keyword: face feature


Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Using the different modalities present in the facial video clips, i.e., the left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique proved robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
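The abstract does not spell out its score-level fusion rule; as a minimal sketch (assuming a simple average of min-max-normalized scores), handling missing modalities can be as simple as skipping absent entries:

```python
import numpy as np

def fuse_scores(modality_scores):
    """Score-level fusion over available modalities.

    modality_scores: dict mapping modality name -> per-identity match
    scores (1-D array), or None when that modality is missing.
    Returns the index of the best-matching identity after averaging
    the min-max-normalized scores of the modalities that are present.
    """
    available = [s for s in modality_scores.values() if s is not None]
    if not available:
        raise ValueError("no modality available")
    normed = []
    for s in available:
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        normed.append((s - s.min()) / rng if rng > 0 else np.zeros_like(s))
    fused = np.mean(normed, axis=0)   # average-fusion rule (an assumption)
    return int(np.argmax(fused))

# Example: right-ear scores are missing; fusion still works.
scores = {
    "frontal_face": np.array([0.2, 0.9, 0.1]),
    "left_profile": np.array([0.3, 0.8, 0.2]),
    "right_ear": None,                # missing modality at test time
}
best = fuse_scores(scores)
```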

Face Feature Extraction for Child Ocular Inspection and Diagnosis of Colics by Crying Analysis (소아 망진을 위한 얼굴 특징 추출 및 영아 산통 진단을 위한 울음소리 분석)

  • Cho Dong-Uk;Kim Bong-Hyun
    • The KIPS Transactions:PartB / v.13B no.2 s.105 / pp.97-104 / 2006
  • When a disease occurs in a child who cannot yet express his or her symptoms, there is no efficient way to examine the child directly; the doctor's diagnosis therefore depends on questioning the child's guardians, which can lead to incorrect results. To address this, this paper develops instruments for child ocular inspection and auscultation diagnosis, based on the Oriental-medicine principle that the state of the five organs and six hollow organs is reflected in the patient's face and voice. The aim is to obtain more accurate diagnoses of a child's symptoms by visualizing, objectifying, and quantifying the diagnostic cues that doctors otherwise judge by intuition. As the first stage of the overall system, this paper develops color correction, YCbCr conversion, facial color selection, and extraction methods for the five sensory organs and the nose apex for child ocular inspection. For child auscultation, the crying characteristics of colic are quantified through pitch, intensity, and formant analysis, objectifying the doctor's intuition. Finally, experiments are performed to verify the effectiveness of the proposed methods.
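The crying analysis above rests on pitch, intensity, and formant measurements. As an illustration only (the abstract does not state the paper's actual signal-processing toolchain), pitch can be estimated from an autocorrelation peak:

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=75.0, fmax=600.0):
    """Estimate fundamental frequency (Hz) by autocorrelation peak-picking."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lag_min = int(sr / fmax)          # shortest period considered
    lag_max = int(sr / fmin)          # longest period considered
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sr / lag

# Synthetic stand-in for a cry: a 300 Hz sine sampled at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
f0 = estimate_pitch(np.sin(2 * np.pi * 300 * t), sr)
```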

A Study on Fusionization of Woman Characters in Fusion Traditional Drama (사극드라마의 여자캐릭터의 분장특성 연구)

  • Kim, Yu-Gyoung;Cho, Ji-Na
    • Journal of Fashion Business / v.13 no.4 / pp.60-76 / 2009
  • The expression of woman characters in fusion traditional dramas keeps such dramas fresh rather than rigid. In particular, woman characters carry great weight as heroines, reflecting their growing prominence in fusion traditional dramas. Character expression that harmonizes the modern with the traditional also helps reflect contemporary trends. The hair styles and face makeup of woman characters in fusion traditional dramas are undergoing fusionization under the influence of postmodernism: modern elements are grafted symbolically onto traditional hair styles. Neutral images are expressed with faded shaggy-cut styles and neoplasticist dyes. The neo-hippie style was transformed into styles of naturalism and nationalism, and hair braided into many strands, as worn by American Indians, became a primitive, ethnic feature. New styles are also produced by permitting even long permanent-wave hair. Straight styles, long shaggy and voluminous wave styles, and neoplasticist hair styles were all distinctive. In face makeup, a luxurious and splendid style was expressed by emphasizing luster, and character images were conveyed through smoky makeup stressing the eye lines. The face makeup differed little from that of contemporary dramas, using pearl shadow with glossy lip makeup and color, which allowed more dramatic expression in fusion traditional dramas than in contemporary ones. Because the makeup of fusion traditional dramas permits diversity, character makeup could mix and match present-day elements with historically researched past elements, laying a foundation for showing audiences diverse makeup elements and enabling unique or special makeup. Diverse makeup expressions used to be limited by lighting conditions in earlier productions; 'fusion' therefore made freer expression possible in fusion traditional dramas than in contemporary ones.

Design and Implementation of a Concentration-based Review Support Tool for Real-time Online Class Participants (실시간 온라인 수업 수강자들의 집중력 기반 복습 지원 도구의 설계 및 구현)

  • Tae-Hwan Kim;Dae-Soo Cho;Seung-Min Park
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.3 / pp.521-526 / 2023
  • Due to the recent pandemic, most education has been conducted through online classes. Unlike face-to-face classes, it is harder for learners to maintain concentration, and evaluating learners' attitudes toward the class is also challenging. In this paper, we propose a real-time concentration-based review support system for learners in real-time video lectures that can be used in online classes. Using only the equipment of the existing video system, it measures the learner's face, pupils, and user activity in real time, and delivers the concentration measurements to the instructor in various forms. At the same time, if the concentration value falls below a certain level, the system alerts the learner and records the corresponding timestamp in the lecture. With this system, instructors can evaluate learners' class participation in real time, helping them improve their teaching.
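The alert-and-record behavior described above can be sketched as follows; the class name, threshold, and score scale are hypothetical, since the abstract does not specify them:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReviewSupport:
    """Records lecture timestamps whenever concentration drops below a threshold."""
    threshold: float = 0.5
    low_points: List[Tuple[float, float]] = field(default_factory=list)

    def update(self, lecture_time_s: float, concentration: float) -> bool:
        """Returns True (alert the learner) when concentration is too low,
        recording the timestamp so the passage can be reviewed later."""
        if concentration < self.threshold:
            self.low_points.append((lecture_time_s, concentration))
            return True
        return False

monitor = ReviewSupport(threshold=0.5)
samples = [(10.0, 0.9), (20.0, 0.3), (30.0, 0.7), (40.0, 0.2)]
alerts = [monitor.update(t, c) for t, c in samples]
```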

Facial Feature Verification System based on SVM Classifier (SVM 분류기에 의한 얼굴 특징 식별 시스템)

  • Park Kang Ryoung;Kim Jaihie;Lee Soo-youn
    • The KIPS Transactions:PartB / v.11B no.6 / pp.675-682 / 2004
  • With the five-day workweek system in banks and the increased use of ATMs (Automated Teller Machines), financial crimes using stolen credit cards must be prevented. Although a CCTV camera is usually installed near an ATM, an intelligent criminal can defeat it by disguising himself with sunglasses or a mask. In this paper, we propose a facial feature verification system that can detect whether a user's face is identifiable, using image processing algorithms and an SVM (Support Vector Machine). Experimental results show an FAR (the error rate of accepting a disguised person as non-disguised) of 1% and an FRR (the error rate of rejecting a normal, non-disguised person as disguised) of 2% on the training data. On the test data, the FAR is 2.5% and the FRR is 1.43%.
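Using the abstract's definitions of FAR and FRR, the two error rates can be computed from verification decisions as follows (the label encoding and sample decisions here are illustrative, not the paper's data):

```python
import numpy as np

def far_frr(y_true, y_pred):
    """FAR/FRR for disguise detection.

    y_true / y_pred: 1 = disguised, 0 = normal (non-disguised).
    FAR: fraction of disguised samples accepted as non-disguised.
    FRR: fraction of normal samples rejected as disguised.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    disguised = y_true == 1
    normal = y_true == 0
    far = float(np.mean(y_pred[disguised] == 0)) if disguised.any() else 0.0
    frr = float(np.mean(y_pred[normal] == 1)) if normal.any() else 0.0
    return far, frr

# Hypothetical decisions: 1 of 4 disguised accepted, 1 of 4 normal rejected.
far, frr = far_frr([1, 1, 1, 1, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 1])
```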

Study of Traffic Sign Auto-Recognition (교통 표지판 자동 인식에 관한 연구)

  • Kwon, Mann-Jun
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.9 / pp.5446-5451 / 2014
  • Because manual processing of electronic maps for navigation terminals introduces errors, this paper proposes automatic offline recognition of traffic signs, which are an essential ingredient of navigation information. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which have been widely used in 2D face recognition as computer vision and pattern recognition applications, were used to recognize the traffic signs. First, PCA projects the high-dimensional 2D image data onto a low-dimensional feature vector. LDA then maximizes the between-class scatter and minimizes the within-class scatter of the low-dimensional feature vectors obtained from PCA. Traffic signs extracted under a real-world road environment were recognized with a 92.3% recognition rate using the 40 feature vectors produced by the proposed algorithm.
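The PCA-then-LDA pipeline described above can be sketched with scikit-learn on synthetic stand-in data (the dimensionalities and data here are illustrative; the paper works with real sign images and 40 feature vectors):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in for flattened sign images: 3 classes of 64-dim vectors,
# each class scattered around a different random template.
templates = rng.normal(size=(3, 64))
X = np.vstack([t + 0.1 * rng.normal(size=(30, 64)) for t in templates])
y = np.repeat([0, 1, 2], 30)

# PCA reduces dimensionality; LDA separates the classes in that subspace
# by maximizing between-class and minimizing within-class scatter.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X, y)
accuracy = model.score(X, y)
```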

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2124-2148 / 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach to FER that is robust to noise. The main contributions of this work are as follows. First, to preserve texture details in facial expression images while removing image noise, we improve the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors: the gray-value difference between the object and the background, and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor that combines the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of the eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method, and its recognition rate is not significantly affected by Gaussian or salt-and-pepper noise.
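The core of the HOG part of the Canny-HOG descriptor is a magnitude-weighted histogram of gradient orientations per cell; a minimal numpy sketch (omitting the Canny step and block normalization, which the paper combines with it) might be:

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Gradient-orientation histogram (the building block of a HOG cell),
    weighted by gradient magnitude, over unsigned orientations [0, 180)."""
    gy, gx = np.gradient(img.astype(float))          # per-axis gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist

# A horizontal intensity ramp: all gradient energy lies at orientation 0.
ramp = np.tile(np.arange(16.0), (16, 1))
hist = orientation_histogram(ramp)
```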

Implementation of User Gesture Recognition System for manipulating a Floating Hologram Character (플로팅 홀로그램 캐릭터 조작을 위한 사용자 제스처 인식 시스템 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.143-149 / 2019
  • Floating holograms are a technology that provides rich 3D stereoscopic images in wide spaces such as advertisements and concerts. They also reduce the inconvenience of 3D glasses, eye strain, and space distortion, allowing viewers to enjoy 3D images with excellent realism and presence. This paper therefore implements a user gesture recognition system for manipulating floating hologram characters that can be used in small-space devices. The proposed method detects the face region using a Haar feature-based cascade classifier and recognizes user gestures in real time from the positions at which gestures occur in the gesture difference image. Each classified gesture is then mapped to a character motion in the floating hologram to control the character's actions. To evaluate the performance of the proposed system, we built a floating hologram display device and repeatedly measured the recognition rate of each gesture, including body shaking, walking, hand shaking, and jumping. The average recognition rate was 88%.
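The difference-image step described above — locating where a gesture occurs between consecutive frames — can be sketched in numpy; the threshold and the synthetic frames here are illustrative:

```python
import numpy as np

def motion_position(prev_frame, cur_frame, threshold=25):
    """Locates gesture activity as the centroid of the thresholded
    difference image; returns None when no motion is detected."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())   # (x, y) motion centroid

# Two synthetic 8-bit frames: a bright block appears around (x=12, y=4).
prev = np.zeros((20, 20), dtype=np.uint8)
cur = prev.copy()
cur[3:6, 11:14] = 200                 # rows 3-5, cols 11-13 change
pos = motion_position(prev, cur)
```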

Facial Feature Detection and Facial Contour Extraction using Snakes (얼굴 요소의 영역 추출 및 Snakes를 이용한 윤곽선 추출)

  • Lee, Kyung-Hee;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications / v.27 no.7 / pp.731-741 / 2000
  • This paper proposes a method to detect the facial region and extract facial features, which is crucial for the visual recognition of human faces. We extract the MER (Minimum Enclosing Rectangle) of the face and of its facial components using projection analysis on both the edge image and the binary image. We use an active contour model (snakes) to extract the contours of the eyes, mouth, eyebrows, and face, so as to reflect individual differences in facial shape and to converge quickly. The choice of initial contour is very important for the performance of snakes; in particular, we detect the MER of each facial component and then determine the initial contour from the general shape of that component within the boundary of the obtained MER. Experimental results show that MER extraction of the eyes, mouth, and face was performed successfully, although MER extraction of the eyebrows performed poorly on images with bright eyebrows. Good contour extraction was obtained across individual differences in facial shape. In particular, for eye contour extraction we combined edges from a first-order derivative operator with zero crossings from a second-order derivative operator in designing the snake's energy function, achieving good eye contours. For face contour extraction, both edges and pixel gray-level intensity were used in the energy function, and good face contours were extracted as well.
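The snake's internal (continuity plus curvature) energy over a discrete closed contour can be written as follows; this sketch omits the paper's image-energy terms (first-derivative edges and second-derivative zero crossings), which depend on the input image:

```python
import numpy as np

def internal_energy(points, alpha=1.0, beta=1.0):
    """Discrete internal energy of a closed snake contour:
    alpha * continuity (first-difference) + beta * curvature (second-difference)."""
    pts = np.asarray(points, dtype=float)
    d1 = np.roll(pts, -1, axis=0) - pts                          # v_{i+1} - v_i
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    return alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)

# A circle of evenly spaced points is smooth, so its internal energy is low.
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
e_circle = internal_energy(circle)

# The same point set visited in shuffled order is jagged: far higher energy.
rng = np.random.default_rng(1)
e_shuffled = internal_energy(circle[rng.permutation(32)])
```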


Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.1-15 / 2024
  • The construction of smart communities is a new method and an important measure for ensuring the security of residential areas. To address the low face recognition accuracy caused by facial-feature distortion due to surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature-enhancement residual module is designed to extract facial keypoint features in conjunction with the global graph convolution module. Second, the obtained facial keypoints are organized into a directed graph structure, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the two identities are the same. Across various experiments, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. In terms of face recognition accuracy, it achieves 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. These results demonstrate that the designed network exhibits high detection and recognition accuracy for faces in surveillance video.
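A single graph-convolution step of the kind the paper builds on — propagating keypoint features over a normalized adjacency — can be sketched in numpy (the graph, feature sizes, and weights here are illustrative, not the paper's architecture):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W),
    i.e. feature propagation with self-loops and symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)           # ReLU activation

# Toy graph of 5 facial keypoints (nodes) with 4-dim features each.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(5, 4))    # node features
W = rng.normal(size=(4, 8))    # learnable weights
out = gcn_layer(A, H, W)
```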