• Title/Summary/Keyword: Color facial Image

Search Results: 161

Multimodal Digital Photographic Imaging System for Total Diagnostic Analysis of Skin Lesions: DermaVision-Pro (다모드 디지털 사진 영상 시스템을 이용한 피부 손상의 진단적 분석에 대한 연구 : DermaVision-Pro)

  • Bae, Young-Woo;Kim, Eun-Ji;Jung, Byung-Jo
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.153-154
    • /
    • 2008
  • Digital photographic analysis is currently considered a routine procedure in the clinic, because periodic follow-up examinations can provide meaningful information for diagnosis. However, it is impractical to evaluate all suspicious lesions separately with conventional digital photographic systems, whose imaging characteristics vary with environmental conditions. To address this issue, conventional systems need to be integrated into a single platform for total diagnostic evaluation in the clinic. Previously, a multimodal digital photographic imaging system, which provides a conventional color image, parallel- and cross-polarization color images, and a fluorescent color image, was developed for objective evaluation of facial skin lesions. Based on that study, we introduce a commercial product, "DermaVision-PRO," for routine clinical use in dermatology. We characterize the system and describe image analysis methods for objective evaluation of skin lesions. To demonstrate the validity of the system in dermatology, sample images were obtained from subjects with various skin disorders, and the image analysis methods were applied to evaluate those lesions objectively.

Face region detection algorithm of natural-image (자연 영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.7 no.1
    • /
    • pp.55-60
    • /
    • 2014
  • In this paper, we propose a method for face region extraction in natural images using skin-color hue and saturation together with facial feature extraction. The proposed algorithm consists of a lighting correction step and a face detection step. In the lighting correction step, a correction function compensates for lighting changes. The face detection step extracts skin-color areas by computing Euclidean distances between the input image and feature vectors of hue and chroma taken from 20 skin-color sample images. Within the extracted candidate areas, eyes are detected using the C component of the CMY color model and the mouth using the Q component of the YIQ color model. The face region is then confirmed using knowledge of human facial structure. In an experiment with 10 natural face images as input, the method showed a face detection rate of 100%.
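The candidate-extraction step described above — classifying a pixel as skin by its Euclidean distance to sample skin-color feature vectors — can be sketched as follows. The sample vectors and threshold here are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical (hue, saturation) means of skin-color samples; the paper uses
# feature vectors drawn from 20 sample images, which are not published here.
SKIN_SAMPLES = [(14.0, 0.45), (18.0, 0.50), (22.0, 0.40)]

def is_skin(hue, sat, threshold=6.0):
    """Classify a pixel as skin if its Euclidean distance in (hue, saturation)
    space to the nearest skin-color sample falls below the threshold."""
    d = min(math.hypot(hue - h, sat - s) for h, s in SKIN_SAMPLES)
    return d < threshold
```

A pixel near the sample cluster is accepted, while a far-off (e.g. bluish) one is rejected; in practice the threshold would be tuned on labeled skin patches.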

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.3
    • /
    • pp.427-435
    • /
    • 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by emotion. Linear discriminant analysis for emotion recognition yielded a correct classification rate of 62.7% across the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and to 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
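The classification step — assigning an emotion from a vector of baseline-to-emotion temperature differences — can be sketched as a nearest-centroid classifier, a simplified stand-in for the paper's linear discriminant analysis. The centroids below are invented for illustration, not the study's measurements:

```python
# Per-emotion centroids of (eye dT, mouth dT, forehead dT, nose dT) in deg C;
# illustrative values only, not data from the study.
CENTROIDS = {
    "anger":   (-0.4, -0.3, -0.2, -0.5),
    "fear":    (-0.6, -0.5, -0.4, -0.8),
    "boredom": (-0.1, -0.1, -0.1, -0.2),
    "neutral": ( 0.0,  0.0,  0.0,  0.0),
}

def classify(features):
    """Return the emotion whose centroid is nearest in squared Euclidean
    distance to the measured temperature-difference vector."""
    def dist2(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(CENTROIDS, key=lambda e: dist2(CENTROIDS[e]))
```

A trained LDA model would instead project the features onto discriminant axes before comparing class means, but the decision structure is analogous.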

Estimation and Watermarking of Motion Parameters in Model Based Image Coding

  • Park, Min-Chul
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1264-1267
    • /
    • 2002
  • To achieve an advanced human-computer interface, it is necessary to analyze and synthesize facial motions interactively, just as they occur, and to protect them from unwanted or illegal use, given privacy concerns, their varied applications, and the cost of obtaining motion parameters. To estimate facial motion, a method using the skin color distribution, luminance, and geometrical information of a face is employed. Digital watermarks are embedded into the facial motion parameters, and the parameters are then scrambled so that they cannot be interpreted. Experimental results show the high accuracy and efficiency of the proposed estimation method and the usefulness of the proposed watermarking method.
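The scrambling step — making watermarked motion parameters unintelligible without a key — could look like the following key-seeded permutation. The abstract does not specify the scrambling scheme, so this is purely an illustrative, invertible example:

```python
import random

def scramble(params, key):
    """Permute a motion-parameter sequence with a key-seeded PRNG so that it
    cannot be interpreted without the key (an assumed scheme; the paper's
    actual scrambling method is not described in the abstract)."""
    rng = random.Random(key)
    order = list(range(len(params)))
    rng.shuffle(order)
    return [params[i] for i in order], order

def unscramble(scrambled, order):
    """Invert the permutation to recover the original parameter order."""
    out = [0.0] * len(scrambled)
    for dst, src in enumerate(order):
        out[src] = scrambled[dst]
    return out
```

Because the permutation is derived from the key, only a holder of the key can reconstruct the motion, while the embedded watermark survives the reordering.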

Real Time Face Detection with TS Algorithm in Mobile Display (모바일 디스플레이에서 TS 알고리즘을 이용한 실시간 얼굴영역 검출)

  • Lee, Yong-Hwan;Kim, Young-Seop;Rhee, Sang-Bum;Kang, Jung-Won;Park, Jin-Yang
    • Journal of the Semiconductor & Display Technology
    • /
    • v.4 no.1 s.10
    • /
    • pp.61-64
    • /
    • 2005
  • This study presents a new algorithm for detecting facial features in a color image captured by a mobile device, with a complex background and an undefined distance between the camera and the face. Since a skin-color model with the Hough transformation spends approximately 90% of its running time extracting the fitted ellipse for facial feature detection, we changed the approach to a simple geometric vector operation called the TS (Triangle-Square) transformation. Experimental results show the benefit of reduced running time: the face detection rate is similar to that of other methods, and the speed is high enough for real-time identification systems in mobile environments.

Face Detection Based on Distribution Map (분포맵에 기반한 얼굴 영역 검출)

  • Cho Han-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.1
    • /
    • pp.11-22
    • /
    • 2006
  • Face detection has recently been researched actively due to its wide range of applications, such as personal identification and security systems. In this paper, a new face detection method based on distribution maps is proposed. Face-like regions are first extracted by applying a frequency-weighted skin-color map to a color image, and possible eye regions are then determined using a pupil-color distribution map within the face-like regions. This reduces the search space for facial features. Eye candidates are detected by template matching with a weighted window, which uses the correlation values of the luminance and chrominance components as feature vectors. Finally, a cost function for mouth detection and the positional relationships between facial features are applied to each pair of eye candidates for face detection. Experimental results show that the proposed method achieves high performance.

A Proposal for Effect Analysis Techniques of Kidney Hand Acupuncture through Face Image and Voice Signal Measurement (얼굴 영상 및 음성신호 측정을 통한 신장 수지침 효과 분석 기법의 제안)

  • Kim, Bong-Hyun;Cho, Dong-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.3C
    • /
    • pp.217-223
    • /
    • 2012
  • In this paper, we propose techniques for analyzing the effect of kidney-related hand acupuncture by measuring changes in facial images and voice signals. To this end, we measured the color change of the JIGAK (jaw) area, which is associated with the kidney in facial imaging, before and after stimulation by kidney-related hand acupuncture. We also measured changes in the first formant frequency bandwidth and Shimmer as kidney-related elements of voice signal analysis. Stimulation by kidney-related hand acupuncture reduced the first formant frequency bandwidth, the Shimmer value, and the blackness of the JIGAK area. Finally, we demonstrate the objective effect of kidney-related hand acupuncture through an analysis of the statistical significance of the facial image and voice signal measurements.

Human Head Mouse System Based on Facial Gesture Recognition

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1591-1600
    • /
    • 2007
  • Camera position information derived from a 2D face image is very important for synchronizing a virtual 3D face model with the real face at the same viewpoint, and also for other uses such as human-computer interfaces (face mouse) and automatic camera control. We present an algorithm that detects the human face region and mouth based on the characteristic colors of the face and mouth in the YCbCr color space. The algorithm constructs a mouth feature image from the Cb and Cr values and uses a pattern-matching method to detect the mouth position. The geometrical relationship between the mouth position and the face's side boundaries is then used to determine the camera position. Experimental results demonstrate the validity of the proposed algorithm, and the correct determination rate is high enough for practical application.
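The mouth feature image built from Cb and Cr can be sketched as below. Lip pixels typically show high Cr and relatively low Cb, so a simple per-pixel score is the clamped excess of Cr over Cb; this is an illustrative proxy, since the abstract does not give the paper's exact feature formula:

```python
def mouth_feature(cb, cr):
    """Mouth-likeness score for one pixel: how much its Cr (redness) exceeds
    its Cb, clamped at zero (an assumed proxy for the paper's feature)."""
    return max(0, cr - cb)

def mouth_feature_image(cb_plane, cr_plane):
    """Build the feature image row by row from the Cb and Cr planes
    (each given as a list of rows of 8-bit chroma values)."""
    return [[mouth_feature(cb, cr) for cb, cr in zip(row_cb, row_cr)]
            for row_cb, row_cr in zip(cb_plane, cr_plane)]
```

Thresholding or pattern matching on this feature image would then localize the mouth, as the abstract describes.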

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.167-174
    • /
    • 2013
  • In this paper, we implement the ability to find and track faces in color images. Face tracking, finding face regions in an image using the functions of a computer system, is a necessary capability for robots. It cannot be performed by skin-color extraction alone, because the face in an image varies with conditions such as lighting and facial expression. We therefore add a lighting compensation function to skin-color pixel extraction and implement a complete processing system that confirms a face by finding the features of the eyes, nose, and mouth. The lighting compensation function is an adjusted sine function; although its output is not suited to human viewing, it improved performance by about 4%. Facial features are detected by amplifying and reducing pixel values and comparing the resulting images; the eye and nose positions and the lips are detected. Face tracking efficiency was good.
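The sine-based lighting compensation could take a form like the following, which lifts mid-tones while leaving the endpoints fixed. The paper's exact adjustment is not given in the abstract, so this mapping is only one plausible sketch:

```python
import math

def light_compensate(v):
    """Sine-based lighting compensation for a normalized luminance v in [0, 1]:
    dark-to-mid values are lifted, while 0 and 1 map to themselves
    (an assumed form of the paper's 'adjusted sine function')."""
    return math.sin(v * math.pi / 2)
```

Applied per pixel before skin-color extraction, such a curve brightens underexposed regions so that more true skin pixels clear the color threshold, even though the compensated image looks unnatural to a human viewer.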

Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.3
    • /
    • pp.60-68
    • /
    • 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis. The positions of the user's eyes are then localized using a neural network (NN)-based texture classifier, and the remaining facial features are localized using heuristics. After the facial features are detected, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
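A heuristic decision tree over facial-feature measurements, as the abstract describes, might look like the toy rule set below. The specific rules and feature names are invented for illustration; the paper's actual heuristics are not given in the abstract:

```python
def classify_expression(mouth_open, mouth_corners_up,
                        eyes_widened, nose_wrinkled):
    """Toy rule tree mapping boolean facial-feature cues to one of the four
    expression classes from the paper (rules are assumed, not the paper's)."""
    if eyes_widened and mouth_open:
        return "surprise"
    if nose_wrinkled:
        return "disgust"
    if mouth_corners_up:
        return "happiness"
    return "neutral"
```

In the real system these cues would be continuous measurements of the localized eye, mouth, and nose regions, thresholded at learned or hand-tuned values at each tree node.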
