• Title/Summary/Keyword: skin color extraction

Search results: 105

Robot vision system for face recognition using fuzzy inference from color-image (로봇의 시각시스템을 위한 칼라영상에서 퍼지추론을 이용한 얼굴인식)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.7 no.2, pp.106-110, 2014
  • This paper proposes a face recognition method that can be effectively applied to a robot's vision system. The proposed algorithm performs recognition using hue extraction and feature points. Hue extraction exploits the differences among skin color, pupil color, and lip color. Feature information is extracted from the eyes, nose, and mouth using feature parameters such as the distance ratio, angle, and area between feature points. The feature parameters are fuzzified with data generated by membership functions, and face recognition is performed by evaluating the degree of similarity. Experiments conducted with frontal color face images as input achieved a recognition rate of 96%.
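The fuzzify-and-compare step this abstract describes could be sketched roughly as below. The triangular membership shape, the parameter tuples, and the simple averaging rule are illustrative assumptions, not the paper's actual functions.

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def similarity_degree(features, prototypes):
    """Average membership of measured feature parameters (distance ratio,
    angle, area, ...) against one stored face's fuzzy prototypes, each
    given as an (a, b, c) triple."""
    grades = [triangular_membership(x, *p) for x, p in zip(features, prototypes)]
    return sum(grades) / len(grades)
```

Recognition would then pick the enrolled face whose prototypes give the highest similarity degree.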

Lip Region Extraction by Gaussian Classifier (가우스 분류기를 이용한 입술영역 추출)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society, v.20 no.2, pp.108-114, 2017
  • Lip reading is a field of image processing that assists sound recognition. In some environments the captured sound signal contains significant noise, so the recognition rate decreases; lip reading can be a good feature for increasing it. Various lip extraction methods have been proposed. Maia et al. proposed a method based on the sum of Cr and Cb, but it has two problems: the point with maximum saturation is not always part of the lips region, and the inner parts of the lips, such as the oral cavity and teeth, can be classified as lips. To solve these problems, this paper proposes a method that adopts a histogram-based classifier for the extraction of the lips region. The proposed method consists of two stages, learning and test. The amount of computation is minimized because the method requires no color conversion. The proposed method gives a detection rate of 66.8%, compared to 28% for the conventional method.
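A histogram-based pixel classifier of the kind described, operating directly on RGB values with no color conversion, might look like the sketch below. The bin count and the two-class likelihood comparison are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def train_histogram_classifier(lip_pixels, nonlip_pixels, bins=32):
    """Build normalized 3D RGB histograms for the lip and non-lip classes.
    Inputs are (N, 3) arrays of 0-255 pixel values; no color-space
    conversion is performed."""
    edges = np.linspace(0, 256, bins + 1)
    def hist3d(px):
        h, _ = np.histogramdd(px, bins=(edges, edges, edges))
        return h / h.sum()
    return hist3d(lip_pixels), hist3d(nonlip_pixels)

def classify(pixels, h_lip, h_nonlip, bins=32):
    """Label each pixel as lip (True) when its likelihood under the lip
    histogram exceeds its likelihood under the non-lip histogram."""
    idx = (pixels // (256 // bins)).astype(int)
    p_lip = h_lip[idx[:, 0], idx[:, 1], idx[:, 2]]
    p_non = h_nonlip[idx[:, 0], idx[:, 1], idx[:, 2]]
    return p_lip > p_non
```

Training pixels would come from labeled lip and non-lip regions of the training images.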

Automatic Face Identification System Using Adaptive Face Region Detection and Facial Feature Vector Classification

  • Kim, Jung-Hoon;Do, Kyeong-Hoon;Lee, Eung-Joo
    • Proceedings of the IEEK Conference, 2002.07b, pp.1252-1255, 2002
  • In this paper, a face recognition algorithm is proposed that uses skin color information in the HSI color space collected from face images, an elliptical mask, facial features including the eyes, nose, and mouth, and geometrical feature vectors of the face and facial angles. The proposed algorithm improves face region extraction by combining HSI information, which is relatively similar to the human visual system, with color tone information about facial skin, the elliptical mask, and intensity information. It further improves recognition by using feature information of the eyes, nose, and mouth together with Θ1 (ACRED), Θ2 (AMRED), and Θ3 (ANRED), the geometrical facial angles. Unlike existing algorithms that use only brightness information, the proposed algorithm enables accurate face identification by combining color tone information, the elliptical mask, brightness information, and the structural characteristic angles, and it uses structurally related characteristic values and feature vectors together in the recognition stage.

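A minimal sketch of the HSI-based skin masking this entry builds on is shown below; the hue and saturation thresholds are illustrative guesses, not the paper's values.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (N, 3) float RGB array in [0, 1] to (H, S, I) arrays."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum.reduce([r, g, b]) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

def skin_mask(rgb, h_max=50.0, s_min=0.1, s_max=0.6):
    """Rough skin mask in HSI space: skin hues cluster near red/orange
    with moderate saturation. Thresholds are illustrative only."""
    h, s, i = rgb_to_hsi(rgb)
    return (h < h_max) & (s > s_min) & (s < s_max)
```

The elliptical mask and intensity checks from the abstract would then refine the regions this mask produces.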

Face Region Extraction using Object Unit Method (객체 단위 방법을 사용한 얼굴 영역 추출)

  • 선영범;김진태;김동욱;이원형
    • Journal of Korea Multimedia Society, v.6 no.6, pp.953-961, 2003
  • This paper suggests an efficient method to extract face regions from a complex background. The input image is transformed to a color space in which the data are independent of brightness, and several regions are extracted using skin color information. Each extracted region is processed as an object; noise and overlapped objects are removed. Candidate objects likely to contain faces are selected by checking the sizes of the extracted objects, their X-Y ratios, and the distribution ratio of skin colors. In this process, objects without faces are excluded from the candidate regions. The proposed method successfully extracts face regions under various conditions, such as complex backgrounds, slanted faces, and faces with accessories.

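The object-unit screening step (size, X-Y ratio, skin-color distribution) could be sketched as below; the field names and threshold values are hypothetical, chosen only to illustrate the three checks the abstract lists.

```python
def is_face_candidate(region, min_area=400, ratio_range=(0.6, 1.8),
                      min_skin_ratio=0.4):
    """Screen one connected skin-color object using the three cues from
    the abstract: object size, bounding-box X/Y ratio, and the fraction
    of skin pixels inside the bounding box. `region` is a dict with
    'area', 'width', 'height', and 'skin_pixels' (names are illustrative)."""
    if region['area'] < min_area:
        return False                      # too small to be a face
    aspect = region['width'] / region['height']
    if not (ratio_range[0] <= aspect <= ratio_range[1]):
        return False                      # implausible face proportions
    density = region['skin_pixels'] / (region['width'] * region['height'])
    return density >= min_skin_ratio      # skin must dominate the box
```

Objects failing any check are dropped from the candidate set before final face verification.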

Performance Comparison of Skin Color Detection Algorithms by the Changes of Backgrounds (배경의 변화에 따른 피부색상 검출 알고리즘의 성능 비교)

  • Jang, Seok-Woo
    • Journal of the Korea Society of Computer and Information, v.15 no.3, pp.27-35, 2010
  • Accurately extracting skin color regions is very important in various areas such as face recognition and tracking, facial expression recognition, adult image identification, and health care. In this paper, we evaluate the performance of several skin color detection algorithms in indoor environments while varying the distance between the camera and the object as well as the background color of the object. The distance ranges from 60 cm to 120 cm, and the background colors are white, black, orange, pink, and yellow. The algorithms used for the evaluation are the Peer, NNYUV, NNHSV, LutYUV, and Kimset algorithms. The experimental results show that the NNHSV, NNYUV, and LutYUV algorithms are stable, while the other algorithms are somewhat sensitive to changes of background. We expect these comparative results to be useful when developing new skin color extraction algorithms that are robust to dynamic real environments.
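A pixel-level detection-rate metric of the kind such comparisons report can be computed as below; treating the rate as recall against a ground-truth skin mask is an assumption, since the abstract does not define its exact measure.

```python
import numpy as np

def detection_rate(pred_mask, gt_mask):
    """Fraction of ground-truth skin pixels that the detector also labels
    as skin (pixel-level recall). Both inputs are boolean arrays of the
    same shape."""
    gt = gt_mask.astype(bool)
    hits = (pred_mask.astype(bool) & gt).sum()
    return hits / max(gt.sum(), 1)
```

Running this per background color and per camera distance reproduces the kind of stability comparison the paper performs.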

The Robust Skin Color Correction Method in Distorted Saturation by the Lighting (조명에 의한 채도 왜곡에 강건한 피부 색상 보정 방법)

  • Hwang, Dae-Dong;Lee, Keunsoo
    • Journal of the Korea Academia-Industrial cooperation Society, v.16 no.2, pp.1414-1419, 2015
  • Detecting skin regions in an image generally relies on color information. However, when saturation is low, skin detection becomes difficult because the hue information of the pixels is lost. This paper therefore proposes a method for correcting the color of skin regions whose saturation has been lowered by lighting. The correction process consists of acquiring a saturation image, classifying and segmenting the low-saturation regions, extracting color and saturation values from the segmented low-saturation regions, and correcting the color. The method extracts the low-saturation regions of the image and then uses the color and saturation of those regions and their surroundings to produce colors close to the originals, so accurate extraction of the low-saturation regions must come first. To segment the low-saturation regions more accurately, the multi-threshold method proposed by Otsu is applied to the hue values of the HSV color space to create a binary image. Experimental results on 170 portrait images show that the proposed method can serve as an effective preprocessing step for skin color detection: the detection rate with the proposed method is 5.8% higher than without it.
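Otsu's method, which the paper applies in a multi-threshold form to the hue channel, can be sketched in its classic single-threshold form as below.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Single-level Otsu threshold: pick the histogram split that
    maximizes the between-class variance. The paper uses a multi-level
    extension of this idea; this is the basic version."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w = np.cumsum(p)               # cumulative class-0 weight
    mu = np.cumsum(p * centers)    # cumulative first moment
    mu_t = mu[-1]                  # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    k = np.nanargmax(sigma_b)      # bin maximizing between-class variance
    return centers[k]
```

Applying this to the hue values and thresholding yields the binary low-saturation-region image the abstract describes.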

ID Face Detection Robust to Color Degradation and Partial Veiling (색열화 및 부분 은폐에 강인한 ID얼굴 검지)

  • Kim Dae Sung;Kim Nam Chul
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.1, pp.1-12, 2004
  • In this paper, we present an identifiable face (ID face) detection method robust to color degradation and partial veiling. The method is composed of three parts: segmentation of face candidate regions, extraction of face candidate windows, and decision of veiling. In the segmentation stage, face candidate regions are detected by finding skin color regions and facial components such as the eyes, nose, and mouth, which may have degraded colors, in the input image. In the extraction stage, face candidate windows with high potential of containing faces are extracted from the candidate regions. In the veiling decision stage, an eigenface method is used to select the face candidate window whose similarity to the eigenfaces is maximum, and whether the facial components of that window are veiled is determined in a similar way. Experimental results show that the proposed method improves the detection rate by about 11.4% on a test DB containing color-degraded and veiled faces, compared with a conventional method that does not consider color degradation and partial veiling.
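The eigenface similarity used in the veiling decision could be sketched as below; scoring a window by its (negated) reconstruction error in the eigenface subspace is a common formulation and an assumption here, since the abstract does not spell out its similarity measure.

```python
import numpy as np

def eigenface_similarity(window, mean_face, eigenfaces):
    """Similarity of a candidate window to the face subspace, computed as
    the negative reconstruction error after projecting onto the
    eigenfaces. Rows of `eigenfaces` are orthonormal basis vectors."""
    x = window.ravel().astype(float) - mean_face
    coeffs = eigenfaces @ x            # project onto the face subspace
    recon = eigenfaces.T @ coeffs      # reconstruct from the projection
    return -np.linalg.norm(x - recon)  # 0 is best; more negative is worse
```

The window with the highest score would be kept, and the same projection applied to component sub-windows to decide veiling.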

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association, v.9 no.7, pp.159-170, 2009
  • Extracting expression data and capturing face images from video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and its expression data from real-time video input. The system consists of three steps: face detection, facial feature extraction, and feature tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas based on the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track the expression data at about 8 fps.
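A YCbCr skin-pixel test like the one used in the face detection step can be sketched as follows; the Cb/Cr ranges are commonly cited values, not necessarily the paper's thresholds.

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Mark pixels whose chrominance falls inside a rectangular skin
    region of the Cb-Cr plane. `rgb` is an (..., 3) array of 0-255
    values; luminance (Y) is deliberately ignored."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))
```

The resulting mask gives the skin candidates that the Haar-based classifier then verifies as face regions.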

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.5, pp.1856-1869, 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a feature-structured framework. Initially, dense depth images are captured with a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion information, and compute the centroid of each activity using a chain coding mechanism and centroid point extraction. For the body skin joint features, we estimate human body skin color to identify body parts (i.e., head, hands, and feet) and extract their joint point information. These joint points are further processed for feature extraction, including distance position features and centroid distance features. Finally, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system should be applicable to consumer applications such as health-care, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
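A plausible reading of the centroid distance features is the distance of each detected body-part point from the silhouette centroid, which can be sketched as below; the exact feature definition in the paper may differ.

```python
import numpy as np

def centroid_distance_features(joints):
    """Distance of each body-part point (e.g. head, hands, feet) from
    the centroid of all points. `joints` is a (K, 2) array of image
    coordinates; returns a (K,) feature vector."""
    centroid = joints.mean(axis=0)
    return np.linalg.norm(joints - centroid, axis=1)
```

Feature vectors of this kind would then be fed to the self-organizing map for activity classification.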

A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction (강인한 손가락 끝 추출과 확장된 CAMSHIFT 알고리즘을 이용한 자연스러운 Human-Robot Interaction을 위한 손동작 인식)

  • Lee, Lae-Kyoung;An, Su-Yong;Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems, v.18 no.4, pp.328-336, 2012
  • In this paper, we propose robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust $YC_bC_r$ skin color model with Haar-like-feature-based AdaBoost. Using the extracted hand candidate regions, we estimate the palm region and fingertip position from distance-transform-based voting and the geometrical features of hands; from the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize the pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region with an accurately extracted fingertip and angle, but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
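The distance-transform-based palm estimate can be sketched as picking the hand pixel farthest from the region boundary. The brute-force distance computation below is for illustration only; real code would use an optimized distance transform such as OpenCV's `cv2.distanceTransform`.

```python
import numpy as np

def palm_center(mask):
    """Estimate the palm center as the hand pixel with the maximum
    distance to the nearest non-hand pixel (the peak of the distance
    transform). `mask` is a small boolean hand mask; this brute-force
    version is O(inside * outside) and only suitable for tiny examples."""
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)
    d = np.linalg.norm(inside[:, None, :] - outside[None, :, :],
                       axis=2).min(axis=1)
    return tuple(inside[d.argmax()])
```

Fingertips would then be sought as contour points far from this palm center along the estimated hand orientation.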