• Title/Summary/Keyword: YIQ color space

Colormap Construction and Combination Method between Colormaps (컬러맵의 생성과 컬러맵간의 결합 방법)

  • Kim, Jin-Hong;Jo, Cheol-Hyo;Kim, Du-Yeong
    • The Transactions of the Korea Information Processing Society / v.1 no.4 / pp.541-550 / 1994
  • A true-color image requires a large amount of data for transmission and storage, so we want to describe a color image with far less data while keeping the loss visually unnoticeable. This paper presents a method for constructing a 256-color colormap in the RGB and YIQ/YUV spaces, together with a common-colormap expression method for merging colormaps, so that images originally quantized with different colormaps can be displayed on one monitor at the same time. Results in the RGB and YIQ/YUV spaces were compared using PSNR, standard deviation, and the edge preservation rate measured with the Sobel operator. Processing time is about 3 seconds for colormap construction and 2 seconds for merging colormaps. In PSNR, the RGB space is higher than the YIQ and YUV spaces by 0.15 and 0.34 on average, and its standard deviation is lower by 0.15 and 0.41 on average. In terms of data compression, however, the YIQ/YUV spaces reduce the data to about 1/3 of the RGB space because only 4 of the 8 bits of each color component are used.
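
Two quantities recur throughout these entries: the RGB-to-YIQ transform itself and the PSNR figure used above to compare color spaces. The sketch below is not code from the paper; it is a minimal NumPy rendition of the standard NTSC YIQ matrix and the usual PSNR definition, with my own function names.

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (Y = luma, I/Q = chrominance).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """Convert an HxWx3 float RGB image with values in [0, 1] to YIQ."""
    return rgb @ RGB_TO_YIQ.T

def psnr(original, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```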

Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.15 no.4 / pp.672-679 / 2011
  • In this paper, a skin-color-based face region detection algorithm for TV images is proposed. First, a reference image is built from sampled skin color, and face region candidates are extracted using the Euclidean distance between the reference and the pixels of the TV image. The eye region is detected from the mean and standard deviation of the color-difference component between Y and C after converting the RGB image to the CMY color model, and the lip region is detected from the Q component after converting the RGB image to the YIQ color space. The face region is then extracted by applying knowledge-based rules to the logical combination of the eye and lip images. To verify the proposed method, experiments were performed on frontal color images captured from TV broadcasts. The results show that the face region is detected irrespective of the location and size of the face.
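
As a rough illustration of the Q-channel lip test described in this abstract (the paper's exact thresholds are not given), the following sketch computes the Q component directly from RGB and keeps pixels whose Q value lies within k standard deviations of the mean measured on sampled lip pixels; the constant k and the sampling step are assumptions.

```python
import numpy as np

def q_channel(rgb):
    """Q component of NTSC YIQ, computed directly from RGB channels."""
    return 0.211 * rgb[..., 0] - 0.523 * rgb[..., 1] + 0.312 * rgb[..., 2]

def lip_candidates(rgb, lip_samples_rgb, k=2.0):
    """Mask pixels whose Q value is close to that of sampled lip pixels."""
    q_img = q_channel(rgb)                      # HxW Q values of the image
    q_ref = q_channel(lip_samples_rgb)          # Q values of Nx3 sampled lip colors
    mu, sigma = q_ref.mean(), q_ref.std()
    return np.abs(q_img - mu) <= k * sigma      # boolean lip-candidate mask
```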

Face Region Detection Algorithm using Euclidean Distance of Color-Image (칼라 영상에서 유클리디안 거리를 이용한 얼굴영역 검출 알고리즘)

  • Jung, Haing-sup;Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.79-86 / 2009
  • This study proposes a method of detecting the facial area by calculating Euclidean distances among skin color elements and extracting the characteristics of the face. The proposed algorithm consists of light calibration and face detection. The light-calibration step compensates for changes in illumination. The face-detection step extracts skin-colored areas by computing Euclidean distances over the input image, using the color and chroma of 20 skin-color sample images as feature vectors. Within the extracted face-region candidates, the eyes are detected in the C channel of the CMY color model and the mouth in the Q channel of the YIQ color model, and the facial area is then confirmed using knowledge of a typical face. In an experiment with 40 color face images as input, the method showed a face detection rate of 100%.
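
The Euclidean-distance step this abstract relies on is simple enough to sketch. The version below is my own reading, not the authors' code: the mean of the sampled skin pixels serves as a reference vector, and any pixel within a fixed distance of it is marked as skin. The feature space (plain RGB here) and the threshold are assumptions.

```python
import numpy as np

def skin_mask_euclidean(image, skin_samples, threshold=0.15):
    """image: HxWx3 float array in [0, 1]; skin_samples: Nx3 array of sample pixels."""
    reference = skin_samples.mean(axis=0)                   # average skin color
    distances = np.linalg.norm(image - reference, axis=-1)  # per-pixel distance to it
    return distances <= threshold                           # boolean skin mask
```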

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.553-561 / 2010
  • This paper proposes a face tracking method that can be applied effectively to a robot's vision system. The proposed algorithm tracks facial areas after detecting the regions of motion in the video. Motion is detected by taking the difference image of two consecutive frames and removing noise with a median filter followed by erosion and dilation. To extract skin color from the moving area, the color information of sample images is used: fuzzy membership functions built from the MIN-MAX values of the samples evaluate similarity and separate the skin-color region from the background. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space, and the face is tracked using knowledge-based features of the detected eyes and mouth. The experiment covers 1,500 video frames from 10 subjects, 150 frames per subject. The motion area was detected in 1,435 frames (95.7%), and the face was tracked correctly in 1,401 frames (97.6%).
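
The motion-detection front end described here (difference of consecutive frames, median filtering, then erosion and dilation) can be sketched with SciPy as below. Kernel sizes, iteration counts, and the difference threshold are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter, binary_erosion, binary_dilation

def motion_mask(frame_prev, frame_curr, diff_thresh=0.05):
    """Both frames are HxW grayscale float arrays in [0, 1]."""
    diff = np.abs(frame_curr - frame_prev)               # frame difference
    mask = median_filter(diff, size=3) > diff_thresh     # suppress impulse noise
    mask = binary_erosion(mask, iterations=1)            # drop isolated pixels
    mask = binary_dilation(mask, iterations=2)           # reconnect moving regions
    return mask
```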

Human Hand Detection Using Color Vision (컬러 시각을 이용한 사람 손의 검출)

  • Kim, Jun-Yup;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.21 no.1 / pp.28-33 / 2012
  • The visual sensing of human hands plays an important part in many man-machine interaction/interface systems. Most existing vision-based hand detection techniques depend on the color cues of human skin. The RGB color image from a vision sensor is often transformed to another color space as a preprocessing step for hand detection, because the transformation is assumed to increase detection accuracy; however, the actual effect of the color space transformation has not been well investigated in the literature. This paper presents a comparative evaluation of pixel-classification performance for hand-skin detection in four widely used color spaces: RGB, YIQ, HSV, and normalized rgb. The experimental results indicate that using the normalized red-green color values is the most reliable choice under different backgrounds, lighting conditions, individuals, and hand postures. Nonlinear classification of pixel colors with a multilayer neural network is also proposed to improve detection accuracy.
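
The normalized red-green representation favored by this paper's results is a one-line transform; the sketch below is my own formulation of the standard definition, in which each channel is divided by the per-pixel R+G+B sum so that most lighting-induced intensity variation cancels out.

```python
import numpy as np

def normalized_rg(rgb, eps=1e-8):
    """HxWx3 RGB image -> HxWx2 array of (r, g) chromaticity values."""
    total = rgb.sum(axis=-1, keepdims=True) + eps   # per-pixel intensity
    chroma = rgb / total
    return chroma[..., :2]                          # b = 1 - r - g is redundant
```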

Traffic Sign Recognition Using Color Information and Error Back Propagation Algorithm (컬러정보와 오류역전파 알고리즘을 이용한 교통표지판 인식)

  • Bang, Gul-Won;Kang, Dea-Wook;Cho, Wan-Hyun
    • The KIPS Transactions:PartD / v.14D no.7 / pp.809-818 / 2007
  • This paper uses color information to extract traffic sign regions and proposes a traffic sign recognition system that applies the error back-propagation algorithm to recognize the extracted images. The proposed method analyzes the colors of traffic signs to find candidate sign regions, using the characteristics of the YUV, YIQ, and CMYK color spaces derived from the RGB color space. Morphological operations then exploit the geometric characteristics of traffic signs to segment the image, and the segmented signs are recognized with the error back-propagation algorithm. Experimental results show that the proposed system extracts candidate regions and recognizes signs well, regardless of lighting differences and the size of the input image.
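
The recognition stage named here is plain error back-propagation. The paper's network architecture is not specified, so the block below is only a compact, generic illustration of the update rule: one sigmoid hidden layer trained by gradient descent on a mean-squared error. Layer size, learning rate, and epoch count are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_backprop(X, T, hidden=16, lr=0.1, epochs=1000, seed=0):
    """X: NxD input features, T: NxK one-hot targets; returns (W1, W2)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, T.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)               # forward pass, hidden layer
        Y = sigmoid(H @ W2)               # forward pass, output layer
        dY = (Y - T) * Y * (1 - Y)        # output-layer error term
        dH = (dY @ W2.T) * H * (1 - H)    # error propagated back to hidden layer
        W2 -= lr * (H.T @ dY)             # gradient-descent weight updates
        W1 -= lr * (X.T @ dH)
    return W1, W2
```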

Face Detection Algorithm for Video Conference Camera Control (화상회의 카메라 제어를 위한 안면 검출 알고리듬)

  • 온승엽;박재현;박규식;이준희
    • Proceedings of the IEEK Conference / 2000.06d / pp.218-221 / 2000
  • In this paper, we propose a new algorithm for detecting human faces in order to control a camera used in video conferencing. We model the distribution of skin color and define a standard skin color in the YIQ color space. An input video frame is segmented into skin and non-skin segments by comparing each pixel with the standard skin color. A shape filter is then applied to select face segments from among the skin segments. The algorithm detects human faces in real time, so the camera can be steered to capture a face at a proper size and position.
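
The abstract mentions a shape filter that keeps only face-like skin segments but does not state its criteria, so the sketch below substitutes an assumed minimum-area and aspect-ratio test on connected components of the skin mask.

```python
import numpy as np
from scipy.ndimage import label, find_objects

def face_like_segments(skin_mask, min_area=500, max_aspect=2.0):
    """Keep connected skin segments that are large and roughly compact."""
    labels, _ = label(skin_mask)
    keep = np.zeros_like(skin_mask, dtype=bool)
    for i, box in enumerate(find_objects(labels), start=1):
        region = labels[box] == i                 # pixels of segment i inside its box
        h, w = region.shape
        aspect = max(h, w) / max(min(h, w), 1)    # bounding-box elongation
        if region.sum() >= min_area and aspect <= max_aspect:
            keep[box] |= region
    return keep
```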

Face Region Detection using a Color Union Model and the Levenberg-Marquardt Algorithm (색상 조합 모델과 LM(Levenberg-Marquadt)알고리즘을 이용한 얼굴 영역 검출)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.14B no.4 / pp.255-262 / 2007
  • This paper proposes an enhanced skin-color-based detection method for finding human face regions in color images. The proposed method combines three color spaces, RGB, $YC_bC_r$, and YIQ, and builds color union histograms of the luminance and chrominance components separately. The combined color union histograms are then fed into a back-propagation neural network for training, with the Levenberg-Marquardt algorithm applied to the training iterations. Applying the Levenberg-Marquardt algorithm helps the network avoid the local-minimum problem of plain back-propagation, a common training method for face detection, and lowers the detection error rate. Moreover, the color union histograms, which emphasize the chrominance components separated from luminance, provide more reliable inputs to the network and give higher detection accuracy than histograms from a single color space. The experiments show that this approach performs well for face region detection and is robust to illumination conditions.
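
My reading of the "color union histogram" input is sketched below: chrominance components from more than one color space (here I and Q from YIQ plus Cb and Cr from $YC_bC_r$) are histogrammed separately from luminance and concatenated into one feature vector for the network. The bin count and the exact component set are assumptions.

```python
import numpy as np

def chrominance_histograms(rgb, bins=32):
    """rgb: HxWx3 float image in [0, 1]; returns a concatenated 1-D feature vector."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i_comp = 0.596 * r - 0.274 * g - 0.322 * b     # YIQ chrominance components
    q_comp = 0.211 * r - 0.523 * g + 0.312 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b        # BT.601 chrominance components
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b
    feats = [np.histogram(c, bins=bins, density=True)[0]
             for c in (i_comp, q_comp, cb, cr)]
    return np.concatenate(feats)
```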

A Comparison of Superpixel Characteristics based on SLIC(Simple Linear Iterative Clustering) for Color Feature Spaces (칼라특징공간별 SLIC기반 슈퍼픽셀의 특성비교)

  • Lee, Jeong Hwan
    • Journal of Korea Society of Digital Industry and Information Management / v.10 no.4 / pp.151-160 / 2014
  • In this paper, the characteristics of SLIC (simple linear iterative clustering) superpixels are compared across several color feature spaces. Computer vision applications have come to rely increasingly on superpixels in recent years. Superpixel algorithms group pixels into perceptually meaningful atomic regions that can replace the rigid structure of the pixel grid. A superpixel consists of pixels with similar features such as luminance, color, and texture, so superpixels are more efficient than raw pixels for large-scale image processing. Superpixel quality is generally described by uniformity, boundary precision and recall, and compactness, but previous work generates superpixels in a single color space and offers little analysis of how the choice of space affects these characteristics. We therefore compare superpixels generated with the widely used SLIC algorithm in the Lab, Luv, LCH, HSV, YIQ, and RGB color feature spaces, measuring uniformity, compactness, and boundary precision and recall. In simulations on the Berkeley image database (BSD300), the Lab color space is superior to the others.
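
A per-color-space comparison of the kind this paper runs can be set up with scikit-image, whose SLIC implementation accepts a pre-converted image once its automatic Lab conversion is turned off. The sketch below shows only two of the six spaces and uses illustrative parameters; in practice the compactness weight needs retuning per space because the channel ranges differ.

```python
from skimage import data, color
from skimage.segmentation import slic

rgb = data.astronaut() / 255.0   # any RGB test image, scaled to [0, 1]

segments = {
    "Lab": slic(color.rgb2lab(rgb), n_segments=200, compactness=10.0,
                convert2lab=False, channel_axis=-1),
    "HSV": slic(color.rgb2hsv(rgb), n_segments=200, compactness=0.1,
                convert2lab=False, channel_axis=-1),
}
for space, labels in segments.items():
    print(space, "->", labels.max(), "superpixels")
```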

A Study on Fire Flame Detection Performance in the Images of Various Color Spaces (다양한 컬러 공간에 따른 영상 내 화염 검출 성능 연구)

  • Choi, Byung-Soo;Kim, Jeong-Dae;Do, Yong-Tae
    • Proceedings of the Korea Multimedia Society Conference / 2012.05a / pp.284-286 / 2012
  • Attention to the prevention of and response to disasters has been increasing. For fire in particular, early detection greatly reduces the damage, so effective detection methods are important. Since most existing detectors must be located close to the fire, analyzing camera images to find fire has become an active research topic. In this paper, we analyze the color characteristics of fire images in various color spaces and report the experimental detection results. The best result is a 77.8% success rate in the YIQ space.
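
The paper does not spell out its decision rule, so the block below is only a placeholder illustration of color-space-based flame masking: bright pixels (high Y) with a strong orange-red chrominance (high I) are kept, with both thresholds chosen by assumption rather than taken from the paper.

```python
import numpy as np

def flame_candidates(rgb, y_thresh=0.5, i_thresh=0.15):
    """rgb: HxWx3 float image in [0, 1]; returns a boolean flame-candidate mask."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # YIQ luma
    i = 0.596 * r - 0.274 * g - 0.322 * b   # YIQ in-phase (orange-cyan) axis
    return (y > y_thresh) & (i > i_thresh)
```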
