• Title/Summary/Keyword: Facial Image Processing


A Study on Detection and Recognition of Facial Area Using Linear Discriminant Analysis

  • Kim, Seung-Jae
    • International journal of advanced smart convergence / v.7 no.4 / pp.40-49 / 2018
  • We propose a more stable and robust recognition algorithm that detects faces reliably even under changes in lighting and viewing angle, while satisfying both computational efficiency and detection performance. The proposed method detects the face area alone after normalization through pre-processing and obtains a feature vector using principal component analysis (PCA). The feature vector is then applied to linear discriminant analysis (LDA), and the final analysis and matching are performed in the two-dimensional space using the Euclidean distance derived from intra-class and inter-class variance. Experimental results show that the proposed method maintains a wider distribution when the input image is rotated $45^{\circ}$ left or right. The recognition rate can be improved by applying this feature value to both single and complex algorithms, and real-time recognition is possible because dimensionality reduction keeps the computational load small.
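The PCA-to-LDA pipeline this abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the toy dimensions and the nearest-neighbour Euclidean matching rule are assumptions:

```python
import numpy as np

def pca_project(X, n_components):
    """Center the data and project onto the top principal components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components].T            # (n_features, n_components)
    return (X - mean) @ W, mean, W

def lda_fit(X, y, n_components=2):
    """Fisher LDA: maximize inter-class over intra-class scatter."""
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((d, d))              # intra-class scatter
    Sb = np.zeros((d, d))              # inter-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Eigenvectors of pinv(Sw) @ Sb give the discriminant directions.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

def match(query, gallery, labels):
    """Nearest neighbour by Euclidean distance in the 2-D LDA space."""
    return labels[np.argmin(np.linalg.norm(gallery - query, axis=1))]
```

A gallery is projected once through PCA and then LDA; a probe vector goes through the same two projections before nearest-neighbour matching.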

Real Time System Realization for Binocular Eyeball Tracking Mouse (실시간 쌍안구 추적 마우스 시스템 구현에 관한 연구)

  • Ryu Kwang-Ryol;Choi Duck-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.9 / pp.1671-1678 / 2006
  • A real-time binocular eyeball-tracking mouse system, operated 30-40 cm from the computer monitor, is presented in this paper. To locate the eyeballs and track the cursor, a facial image is acquired by a small CCD camera and converted to a binary image; the two eyes are found with a five-region mask method in the eye surroundings, and each iris is located by a four-point diagonal positioning method at its sides. The tracking cursor is matched by measuring the central moving position of the iris. Cursor control is achieved by comparing the two related distances, the maximum iris movement and the cursor movement, to calculate the moving distance between the gazing position and the screen. The experimental results show that the binocular eyeball mouse system is simple and fast enough to run in real time.
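The distance-ratio cursor control in the last step can be illustrated with a small mapping function. The linear scaling and clamping below are an assumption about how the paper's "two related distances" relate iris excursion to cursor position, not its exact formula:

```python
def iris_to_cursor(iris_pos, iris_center, iris_max_move, screen_size):
    """Map iris displacement from its rest center to a screen cursor position.

    The scale factor is the ratio of screen size to the full iris excursion
    range (twice the maximum one-sided movement), an illustrative assumption.
    """
    dx = iris_pos[0] - iris_center[0]
    dy = iris_pos[1] - iris_center[1]
    sx = screen_size[0] / (2 * iris_max_move[0])
    sy = screen_size[1] / (2 * iris_max_move[1])
    x = screen_size[0] / 2 + dx * sx
    y = screen_size[1] / 2 + dy * sy
    # Clamp the cursor to the screen.
    return (min(max(x, 0), screen_size[0]),
            min(max(y, 0), screen_size[1]))
```

With this mapping, the resting iris position lands at the screen center and the maximum measured iris movement reaches the screen edge.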

Recent advances in the reconstruction of cranio-maxillofacial defects using computer-aided design/computer-aided manufacturing

  • Oh, Ji-hyeon
    • Maxillofacial Plastic and Reconstructive Surgery / v.40 / pp.2.1-2.7 / 2018
  • With the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology, it has been possible to reconstruct the cranio-maxillofacial defect with more accurate preoperative planning, precise patient-specific implants (PSIs), and shorter operation times. The manufacturing processes include subtractive manufacturing and additive manufacturing and should be selected in consideration of the material type, available technology, post-processing, accuracy, lead time, properties, and surface quality. Materials such as titanium, polyethylene, polyetheretherketone (PEEK), hydroxyapatite (HA), poly-DL-lactic acid (PDLLA), polylactide-co-glycolide acid (PLGA), and calcium phosphate are used. Design methods for the reconstruction of cranio-maxillofacial defects include the use of a pre-operative model printed with pre-operative data, printing a cutting guide or template after virtual surgery, a model after virtual surgery printed with reconstructed data using a mirror image, and manufacturing PSIs by directly obtaining PSI data after reconstruction using a mirror image. By selecting the appropriate design method, manufacturing process, and implant material according to the case, it is possible to obtain a more accurate surgical procedure, reduced operation time, the prevention of various complications that can occur using the traditional method, and predictive results compared to the traditional method.

A Pilot Study on Evoked Potentials by Visual Stimulation of Facial Emotion in Different Sasang Constitution Types (얼굴 표정 시각자극에 따른 사상 체질별 유발뇌파 예비연구)

  • Hwang, Dong-Uk;Kim, Keun-Ho;Lee, Yu-Jung;Lee, Jae-Chul;Kim, Myoyung-Geun;Kim, Jong-Yeol
    • Journal of Sasang Constitutional Medicine / v.22 no.1 / pp.41-48 / 2010
  • 1. Objective: There have been a few trials to diagnose Sasang Constitution using EEG, but it has not been studied intensively. For practical diagnosis, the EEG characteristics of each constitution should be studied first. Recently it has been shown that Sasang Constitution might be related to harm avoidance and novelty seeking in temperament and character profiles. Based on this finding, we propose a visual stimulation method to evoke an EEG response that may discriminate between constitutional groups. Through an experiment with this method, we tried to reveal the EEG characteristics of each constitutional group using event-related potentials. 2. Methods: We used facial visual stimulation to examine the EEG characteristics of each constitutional group. To reveal characteristics in the sensitivity and latency of the response, we added several levels of noise to the facial images. Six healthy male subjects in their 20s (2 Taeeumin, 2 Soyangin, 2 Soeumin) participated in this study. To remove artifacts and slow modulation, EOG-contaminated data were discarded and renormalization was applied. To extract stimulation-related components, a normalized event-related potential method was used. 3. Results: From the Oz channel, it was verified that facial image processing components were extracted. For lower noise levels, components related to the visual stimulation were clearly shown in the Oz, Pz, and Cz channels. The Pz and Cz channels showed differences among the three constitutional groups, maximal around 200 ms, and a moderate noise level in particular looks appropriate for diagnosis. 4. Conclusion: We verified that visual stimulation with facial emotion might be a good candidate for evoking differences between constitutional groups in the EEG response. The differences shown in the experiment may imply that emotion processing has distinct tendencies in latency and sensitivity for each constitutional group, and this distinction might be related to the temperament profiles of the constitutional groups.

Automatic Denoising of 2D Color Face Images Using Recursive PCA Reconstruction (2차원 칼라 얼굴 영상에서 반복적인 PCA 재구성을 이용한 자동적인 잡음 제거)

  • Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.2 s.308 / pp.63-71 / 2006
  • Denoising and reconstruction of color images have been extensively studied in computer vision and image processing. Denoising and reconstruction of color face images are more difficult than those of natural images because of the structural characteristics of human faces as well as the subtleties of color interactions. In this paper, we propose a denoising method based on PCA reconstruction for removing complex color noise on human faces, which is not easy to remove with vectorial color filters. The proposed method is composed of five steps: training a canonical eigenface space using PCA; automatic extraction of facial features using an active appearance model; smoothing of the reconstructed color image using a bilateral filter; extraction of noise regions using the variance of the training data; and reconstruction using the partial information of the input image (excluding the noise regions), followed by blending of the reconstructed image with the original image. Experimental results show that the proposed denoising method maintains the structural characteristics of input faces while efficiently removing complex color noise.
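The core reconstruction step, fitting eigenface coefficients from only the clean pixels and blending the result back into the noise region, can be sketched as follows. This is a simplified single-pass illustration of the idea (the paper iterates it recursively), and the least-squares fit is an assumption about how the partial reconstruction is computed:

```python
import numpy as np

def train_eigenspace(faces, k):
    """faces: (n_samples, n_pixels) clean face vectors.
    Returns the mean face and the top-k eigenfaces."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct_masked(x, noise_mask, mean, eigenfaces):
    """Fit eigenface coefficients by least squares using only the clean
    pixels, then blend: reconstructed values inside the noise region,
    original pixel values outside it."""
    clean = ~noise_mask
    A = eigenfaces[:, clean].T         # (n_clean_pixels, k)
    b = (x - mean)[clean]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    recon = mean + coeffs @ eigenfaces
    return np.where(noise_mask, recon, x)
```

Because the fit ignores corrupted pixels, noise inside the masked region cannot bias the coefficients, and clean pixels pass through unchanged.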

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.167-174 / 2013
  • In this paper, we find faces in color images and implement the ability to track them. Face tracking is the task of finding face regions in an image using the functions of a computer system, a capability necessary for robots. However, face tracking cannot be performed by simply extracting skin color from the image, because the face in an image varies with conditions such as lighting and facial expression. In this paper, we add a lighting-compensation function to the skin-color pixel extraction function and implement the entire processing system, including confirming a candidate region as a face by finding the features of the eyes, nose, and mouth. The lighting-compensation function is an adjusted sine function; although its result is not suited to human vision, it showed about a 4% improvement. Face features are detected by amplifying and reducing pixel values and comparing the resulting images, from which the eye and nose positions and the lips are detected. The face-tracking efficiency was good.
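The skin-color extraction step with lighting compensation can be sketched as below. The gray-world gain compensation is a simple stand-in for the paper's adjusted-sine function, and the normalized-rgb thresholds are commonly cited illustrative values, not the paper's:

```python
import numpy as np

def compensate_lighting(img):
    """Gray-world gain compensation: scale each channel so its mean
    matches the overall mean brightness (a stand-in for the paper's
    adjusted-sine compensation)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0, 255)

def skin_mask(img):
    """Per-pixel skin test in normalized-rgb space; the threshold
    ranges here are illustrative assumptions."""
    rgb = img.astype(float)
    s = np.maximum(rgb.sum(axis=2), 1e-6)
    r = rgb[..., 0] / s
    g = rgb[..., 1] / s
    return (r > 0.36) & (r < 0.465) & (g > 0.28) & (g < 0.363)
```

Compensating the image before thresholding is what makes the fixed skin-color ranges usable across different lighting conditions.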

Research on Pairwise Attention Reinforcement Model Using Feature Matching (특징 매칭을 이용한 페어와이즈 어텐션 강화 모델에 대한 연구)

  • Joon-Shik Lim;Yeong-Seok Ju
    • Journal of IKEEE / v.28 no.3 / pp.390-396 / 2024
  • Vision Transformer (ViT) learns relationships between patches, but it may overlook important features such as color, texture, and boundaries, which can result in performance limitations in fields like medical imaging or facial recognition. To address this issue, this study proposes the Pairwise Attention Reinforcement (PAR) model. The PAR model takes both the training image and a reference image as input into the encoder, calculates the similarity between the two images, and matches the attention score maps of images with high similarity, reinforcing the matching areas of the training image. This process emphasizes important features between images and allows even subtle differences to be distinguished. In experiments using clock-drawing test data, the PAR model achieved a Precision of 0.9516, Recall of 0.8883, F1-Score of 0.9166, and an Accuracy of 92.93%. The proposed model showed a 12% performance improvement compared to API-Net, which uses the pairwise attention approach, and demonstrated a 2% performance improvement over the ViT model.

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts face feature points with statistical models of shape and texture information based on principal component analysis (PCA). This method is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to initial values, and detection error increases when an input image differs considerably from the training data. In particular, the algorithm shows high accuracy for closed lips, but detection error increases for opened or deformed lips as the user's facial expression changes. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. In this paper, we select a searching region based on the face feature points detected by the AAM algorithm. Lip corner points are then extracted by Canny edge detection and a histogram projection method in the selected searching region. The lip region is accurately detected by combining the color and edge information of the lip in the searching region, which is adjusted based on the positions of the detected lip corners. On this basis, the accuracy and processing speed of lip detection are improved. Experimental results showed that the RMS (root mean square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
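The histogram-projection step for locating lip corners can be sketched as below. The sketch assumes a binary edge map of the lip searching region has already been computed (e.g., by a Canny detector); the corner rule, leftmost and rightmost columns with enough edge pixels, is an illustrative assumption rather than the paper's exact criterion:

```python
import numpy as np

def lip_corners(edge_map, thresh=1):
    """Column-wise histogram projection on a binary edge map:
    sum edge pixels per column, then take the leftmost and rightmost
    columns whose count reaches `thresh` as the lip-corner x positions.
    The corner y is the row of the strongest response in that column."""
    col_hist = edge_map.sum(axis=0)
    cols = np.nonzero(col_hist >= thresh)[0]
    if cols.size == 0:
        return None                     # no edges in the search region
    left_x, right_x = int(cols[0]), int(cols[-1])
    left_y = int(np.argmax(edge_map[:, left_x]))
    right_y = int(np.argmax(edge_map[:, right_x]))
    return (left_x, left_y), (right_x, right_y)
```

The two corner points can then anchor and rescale the searching region before the color-plus-edge lip segmentation.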

Study on the Development of Program for Measuring Preference of Portrait based on Sensibility (감성기반 인물사진 선호도 측정 프로그램 개발 연구)

  • Lee, Chang-Seop;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.18 no.2 / pp.178-187 / 2018
  • This study aimed to develop a program model that automatically measures the preference of portraits, based on the relationship between image quality factors and preferences, for manufacturers aiming at high user utilization. To proceed with the evaluation, image quality measurement was divided into objective and subjective items, and the evaluation was done through image processing and statistical methods. RSC Contrast, dynamic range, and noise were selected as the objective evaluation items, and their numerical values were statistically analyzed and evaluated by the program. Exposure, color tone, composition of the person, position of the person, and out-of-focus were selected as the subjective evaluation items and evaluated by image processing. Applying both the objective and subjective assessment items, the results of the developed program agreed closely with the results of actual visual inspection. However, since the currently developed program can evaluate a photograph only after recognizing the person's face, future research will need to develop a program that can evaluate all kinds of portraits.

Development of Smart-Car Safety Management System Focused on Drunk Driving Control (음주제어를 중심으로 한 스마트 자동차 안전 관리 시스템 개발)

  • Lee, Se-Hwan;Cho, Dong-Uk
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.7C / pp.565-575 / 2012
  • In modern everyday life, cars account for the largest share of devices requiring smart features, and a variety of smart devices and methods have been developed for them. In this paper, as part of developing a safety management system for smart cars centered on controlling drunk and drowsy driving, we develop a system that prevents the car from being driven if the driver has been drinking when the car is started. For this, a method is proposed that analyzes the facial color of the person in the driver's seat through image processing to determine whether alcohol has been consumed. In particular, the system developed in this paper determines drinking status solely from the characteristic facial color change that appears after drinking, without requiring a pre-drinking reference image of the face, so it can be applied effectively to an actual alcohol-control smart-car safety management system. The experiment was performed on 30 subjects whose facial color changes after drinking were observed. We also analyzed the statistical significance of the experimental results to verify the effectiveness of the proposed method.
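A reference-free facial color check of the kind the abstract describes can be sketched as a simple redness statistic over the face region. The redness index and the baseline/margin values below are illustrative assumptions, not the paper's measured thresholds:

```python
import numpy as np

def redness_index(face_rgb):
    """Mean redness of the face region: the R channel's share of the
    total intensity, averaged over all pixels."""
    rgb = face_rgb.reshape(-1, 3).astype(float)
    totals = np.maximum(rgb.sum(axis=1), 1e-6)
    return float((rgb[:, 0] / totals).mean())

def looks_flushed(face_rgb, baseline=0.36, margin=0.03):
    """Flag drinking-related facial flushing when the redness index
    exceeds a population baseline by a margin (both values are
    illustrative assumptions, not calibrated thresholds)."""
    return redness_index(face_rgb) > baseline + margin
```

Because the decision uses only the post-drinking color distribution against a fixed baseline, no pre-drinking image of the same driver is needed, which matches the system concept described above.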