• Title/Summary/Keyword: Color pixels

Search Results: 382

Improved face detection method at a distance with skin-color and variable edge-mask filtering (피부색과 가변 경계마스크 필터를 이용한 원거리 얼굴 검출 개선 방법)

  • Lee, Dong-Su;Yeom, Seok-Won;Kim, Shin-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.2A
    • /
    • pp.105-112
    • /
    • 2012
  • Face detection at a distance is very challenging since images are often degraded by blurring, noise, and low resolution. This paper proposes an improved face detection method with AdaBoost filtering and sequential testing stages using color and shape information. The conventional AdaBoost filter detects face regions but often generates false alarms. The face detection method is improved by adopting sequential testing stages that remove these false alarms. The testing stages comprise a skin-color test and variable edge-mask filtering. The skin-color filtering is composed of two steps, which operate on rectangular window regions and on individual pixels to generate binary face clusters. The size of the variable edge-mask is determined by the ellipse estimated from the face cluster. The validity of the horizontal-to-vertical ratio of the mask is also examined. In the experiments, the efficacy of the proposed algorithm is demonstrated on images captured by a CCTV and a smart-phone.
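A minimal sketch of the window-level skin-color test described in the abstract; the YCbCr bounds and the acceptance ratio are illustrative assumptions, not the paper's values:

```python
import numpy as np
import cv2

def skin_color_test(bgr_image, window, min_skin_ratio=0.4):
    """Accept or reject a candidate face window by the fraction of skin pixels.

    window: (x, y, w, h) rectangle proposed by the AdaBoost detector.
    The YCbCr thresholds and min_skin_ratio are illustrative assumptions,
    not the values used in the paper.
    """
    x, y, w, h = window
    roi = bgr_image[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)

    # Per-pixel skin mask in YCbCr space (OpenCV channel order is Y, Cr, Cb).
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Binary face cluster: keep the window only if enough pixels look like skin.
    skin_ratio = np.count_nonzero(skin_mask) / skin_mask.size
    return skin_ratio >= min_skin_ratio, skin_mask
```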

Foreground object detection in projection display (프로젝션 화면에서 전경물체 검출)

  • Kang Hyun;Lee Chang Woo;Park Min Ho;Jung Keechul
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.1
    • /
    • pp.27-37
    • /
    • 2004
  • Detecting foreground objects in a projection display using color information is difficult due to changing lighting conditions and complex backgrounds. Accordingly, this paper proposes a foreground object detection method using color information obtained from the image sent to the projector and an image captured by a camera above the projection display. After pixel correspondences between the two images are found by calibrating the geometric and color distortions, the natural color variations of the projection display are estimated. Any pixel showing a variation that cannot be explained by natural geometric or color distortion is then considered part of a foreground object, because a foreground object in a projection display changes the values of the pixels it covers. As shown by the experimental results, the proposed foreground detection method is applicable to an interactive projection display system such as the DigitalDesk.
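A rough sketch of the pixel-wise foreground test, assuming the geometric and color calibration has already produced an expected camera view of the projected image; the margin parameter is an assumed tolerance:

```python
import numpy as np

def detect_foreground(expected_rgb, captured_rgb, natural_variation, margin=10.0):
    """Mark pixels whose deviation exceeds the estimated natural variation.

    expected_rgb      : projector input warped/color-mapped into camera coordinates
                        (the calibration itself is outside this sketch).
    natural_variation : per-pixel (or scalar) variation estimated on an empty display.
    margin            : extra tolerance, an assumed parameter.
    """
    diff = np.linalg.norm(captured_rgb.astype(np.float32)
                          - expected_rgb.astype(np.float32), axis=2)
    return diff > (natural_variation + margin)   # boolean foreground mask
```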

The Robust Skin Color Correction Method in Distorted Saturation by the Lighting (조명에 의한 채도 왜곡에 강건한 피부 색상 보정 방법)

  • Hwang, Dae-Dong;Lee, Keunsoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.2
    • /
    • pp.1414-1419
    • /
    • 2015
  • Skin regions in an image are generally detected from color information. However, if the saturation is lowered, skin detection becomes difficult because the hue information of the pixels is lost. In this paper, we therefore propose a method for correcting the color of skin regions whose saturation has been lowered by the lighting. The color correction process consists of acquiring a saturation image, classifying and segmenting low-saturation regions, extracting saturation and color values from the segmented low-saturation regions, and correcting the color. The method extracts low-saturation regions from the image and uses the color and saturation of each region and its surroundings to produce a color close to the original, so the low-saturation regions must first be extracted accurately. To segment the low-saturation regions more accurately, we apply Otsu's multi-threshold method to the Hue values of the HSV color space and create a binary image. Our experimental results on 170 portrait images show that the proposed method can serve as an efficient preprocessing step for skin-color detection, as the detection rate with the proposed method is 5.8% higher than without it.
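A simplified sketch of the low-saturation region extraction step. The paper applies Otsu's multi-level thresholding to the Hue channel; this sketch substitutes a single Otsu threshold on the saturation channel to isolate low-saturation pixels:

```python
import cv2
import numpy as np

def low_saturation_mask(bgr_image, sat_threshold=None):
    """Binary mask of low-saturation (lighting-washed) regions.

    Simplified stand-in: plain Otsu thresholding on the saturation channel,
    rather than the paper's multi-level Otsu thresholding.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]
    if sat_threshold is None:
        # Otsu picks the threshold automatically from the histogram.
        sat_threshold, _ = cv2.threshold(saturation, 0, 255,
                                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return (saturation < sat_threshold).astype(np.uint8) * 255
```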

Banding Artifacts Reduction Method in Multitoning Based on Threshold Modulation of MJBNM (MJBNM의 임계값 변조를 이용한 멀티토닝에서의 띠 결점 감소 방법)

  • Park Tae-Yong;Lee Myong-Young;Son Chang-Hwan;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.40-47
    • /
    • 2006
  • This paper proposes a multitoning method using threshold modulation of the MJBNM (Modified Jointly Blue Noise Mask) to reduce banding artifacts. Banding artifacts in multitoning appear as uniform dot distributions around the intermediate output levels, so the multitone output shows discontinuities and visually unpleasant patterns in smooth transition regions. To reduce these banding artifacts, the proposed method rearranges the dot distribution by introducing pixels in the neighborhood of the output levels that cause banding artifacts. First, the principal causes of banding artifacts are analyzed with a mathematical description. Based on this analysis, a threshold modulation technique for the MJBNM is applied that takes account of the chrominance error and the correlation between channels. The original threshold range of the MJBNM is first scaled linearly so that the minimum and maximum of the scaled range extend two pixel values beyond the two adjacent output levels that cover an input value. If an input value lies in the vicinity of an intermediate output level that produces banding artifacts, the output is set to one of the neighboring output levels based on a pointwise comparison governed by a threshold modulation parameter that determines the dot density and distribution. In this case, adjacent pixels are introduced at the positions where the scaled threshold values fall between the two output levels and the minimum and maximum threshold values. Otherwise, a conventional multitoning method is applied. As a result, the proposed method effectively reduces the appearance of banding artifacts around the intermediate output levels. To evaluate the quality of the multitone result, the HVS-WRMSE across gray levels for a gray ramp image and the S-CIELAB color difference for a color ramp image are compared with other methods.
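A generic sketch of mask-based multitoning between adjacent output levels, for orientation only; the dither mask stands in for the MJBNM, and the paper's threshold modulation near banding-prone levels is not reproduced:

```python
import numpy as np

def multitone(gray, output_levels, threshold_mask):
    """Generic mask-based multitoning between adjacent output levels.

    gray           : float image in [0, 1].
    output_levels  : sorted array of available output levels in [0, 1].
    threshold_mask : dither mask in [0, 1], tiled to the image size
                     (stands in for the MJBNM; no threshold modulation here).
    """
    levels = np.asarray(output_levels, dtype=np.float64)
    # Index of the lower neighboring output level for every pixel.
    idx = np.clip(np.searchsorted(levels, gray, side="right") - 1, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    # Fractional position of the input between the two adjacent levels.
    frac = (gray - lo) / (hi - lo)
    reps = (gray.shape[0] // threshold_mask.shape[0] + 1,
            gray.shape[1] // threshold_mask.shape[1] + 1)
    tiled = np.tile(threshold_mask, reps)[:gray.shape[0], :gray.shape[1]]
    return np.where(frac > tiled, hi, lo)
```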

The Lines Extraction and Analysis of The Palm using Morphological Information of The Hand and Contour Tracking Method (손의 형태학적 정보와 윤곽선 추적 기법을 이용한 손금 추출 및 분석)

  • Kim, Kwang-Baek
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.2
    • /
    • pp.243-248
    • /
    • 2011
  • In this paper, we propose a new method to extract and read palm lines from a single photo using simple techniques. We use morphological information and an 8-directional contour tracking algorithm. The digitized RGB image is first transformed to the YCbCr color model, which is less sensitive to brightness. The palm region is extracted by simple skin-color thresholds of Y: 65~255, Cb: 25~255, Cr: 130~255. Noise is then removed using morphological information of the palm, namely that the palm area contains more than a quarter of the pixels and that its width-to-height ratio exceeds 2:1, together with the 8-directional contour tracking algorithm. The stretching algorithm and the Sobel mask are then applied to extract edges. Further morphological information, that meaningful edges (palm lines) are between 10 and 20 pixels long, is used to exclude noise edges and the boundary lines of the hand from the block-binarized image. The main palm lines are then extracted by a labeling method. The algorithm is effective even when reading the palm from a photo taken by a mobile phone, which suggests that it could be used in various applications.
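A short sketch of the palm-region extraction step using the Y/Cb/Cr bounds quoted in the abstract; the contour tracking, noise-removal, and line-extraction stages are omitted:

```python
import cv2
import numpy as np

def palm_region_mask(bgr_image):
    """Skin mask using the thresholds quoted in the abstract
    (Y 65-255, Cb 25-255, Cr 130-255).

    Note: OpenCV's COLOR_BGR2YCrCb stores channels as (Y, Cr, Cb),
    so the Cb/Cr bounds are ordered accordingly.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([65, 130, 25], dtype=np.uint8)    # (Y, Cr, Cb)
    upper = np.array([255, 255, 255], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)
```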

A New Face Tracking Method Using Block Difference Image and Kalman Filter in Moving Picture (동영상에서 칼만 예측기와 블록 차영상을 이용한 얼굴영역 검출기법)

  • Jang, Hee-Jun;Ko, Hye-Sun;Choi, Young-Woo;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.163-172
    • /
    • 2005
  • When tracking a human face in a moving picture with a complex background under irregular lighting, the detected face region can be larger than the face, including background, or smaller, covering only part of the face; background regions can even be detected as faces. To solve these problems, this paper proposes a new face tracking method using a block difference image and a Kalman estimator. The block difference image detects even a small motion of a person, and the face area is selected using the skin color inside the detected motion area. If pixels with skin color exist inside the detected motion area, the boundary of the area is represented by a code sequence using the 8-neighbor window, and the head area is detected by analyzing this code. The pixels in the head area are segmented by color, and the region most similar to the skin color is taken as the face area. The detected face area is represented by a rectangle enclosing it, and its four vertices are used as the states of the Kalman estimator to trace the motion of the face area. Experiments show that the proposed method increases the accuracy of face detection and significantly reduces the face detection time.
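A minimal constant-velocity Kalman estimator for one rectangle vertex, as a sketch of the tracking stage; the paper tracks all four vertices, and the noise covariances here are assumed values:

```python
import numpy as np

class VertexKalman:
    """Constant-velocity Kalman estimator for one rectangle vertex (x, y).

    The paper feeds all four vertices of the face rectangle to the estimator;
    this sketch tracks a single vertex. Noise covariances are assumed values.
    """
    def __init__(self, x, y, process_var=1e-2, meas_var=1.0):
        self.state = np.array([x, y, 0.0, 0.0])          # position and velocity
        self.P = np.eye(4)
        self.F = np.array([[1, 0, 1, 0],                 # state transition (dt = 1 frame)
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is measured
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]                            # predicted vertex position

    def update(self, measured_xy):
        z = np.asarray(measured_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ (z - self.H @ self.state)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```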

Estimation of Illuminant Chromaticity by Equivalent Distance Reference Illumination Map and Color Correlation (균등거리 기준 조명 맵과 색 상관성을 이용한 조명 색도 추정)

  • Kim Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.6
    • /
    • pp.267-274
    • /
    • 2023
  • In this paper, a method for estimating the illuminant chromaticity of a scene from an input image is proposed. The illuminant chromaticity is estimated using illuminant reference regions. The conventional method uses a fixed number of reference illuminants: by comparing the chromaticity distribution of the pixels of the input image with chromaticity sets prepared in advance for the reference illuminants, the reference illuminant with the largest overlapping area is regarded as the scene illuminant of the input image. In calculating the overlapping area, the weight for each reference illuminant is applied in the form of a Gaussian distribution, but no clear criterion for its variance has been given. The proposed method extracts an independent reference chromaticity region for each reference illuminant, computes feature values in the r-g chromaticity plane of the RGB color coordinate system for all pixels of the input image, evaluates the similarity between the independent chromaticity regions and the features of the input image, and takes the illuminant with the highest similarity as the illuminant chromaticity of the image. The performance of the proposed method was evaluated on database images and showed an average improvement of about 60% over the conventional basic method and around 53% over the conventional method with a Gaussian weight of 0.1.
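A small sketch of the r-g chromaticity distribution that such a comparison starts from; the reference chromaticity regions, their feature values, and the paper's similarity measure are not reproduced, and the bin count is an assumed parameter:

```python
import numpy as np

def rg_chromaticity_histogram(rgb_image, bins=32):
    """2-D histogram of pixel chromaticities in the r-g plane,
    where r = R/(R+G+B) and g = G/(R+G+B)."""
    rgb = rgb_image.reshape(-1, 3).astype(np.float64)
    total = rgb.sum(axis=1)
    valid = total > 0                       # skip pure-black pixels
    r = rgb[valid, 0] / total[valid]
    g = rgb[valid, 1] / total[valid]
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()

def overlap_similarity(image_hist, reference_hist):
    """Histogram-intersection style overlap between the image distribution
    and a reference illuminant's chromaticity distribution."""
    return np.minimum(image_hist, reference_hist).sum()
```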

Content-based Image Retrieval Using Color Adjacency and Gradient (칼라 인접성과 기울기를 이용한 내용 기반 영상 검색)

  • Jin, Hong-Yan;Lee, Ho-Young;Kim, Hee-Soo;Kim, Gi-Seok;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.1
    • /
    • pp.104-115
    • /
    • 2001
  • A new content-based color image retrieval method integrating color adjacency and gradient features is proposed in this paper. As the most widely used feature of color images, the color histogram has the advantages of being invariant to changes in viewpoint and to rotation of the image, and of being simple and fast to compute. However, histogram-based retrieval has difficulty distinguishing different images with similar color distributions, because the color histogram is generated over uniformly quantized colors and contains no spatial information. Another shortcoming of histogram-based retrieval is that the feature storage is usually very large. To avoid these drawbacks, the proposed method computes the gradient, defined as the largest color difference between neighboring pixels, instead of the uniform quantization commonly used in most histogram-based methods. In addition, the color adjacency information, which indicates the major color composition of an image, is extracted and represented in binary form to reduce the amount of feature storage. The two features are integrated to make the retrieval more robust to changes in various external conditions.
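A sketch of the gradient feature as defined in the abstract (the largest color difference between neighboring pixels); the color adjacency encoding and the retrieval pipeline are not reproduced:

```python
import numpy as np

def max_neighbor_color_difference(rgb_image):
    """Per-pixel gradient defined as the largest color (Euclidean RGB)
    difference to a 4-neighbor."""
    img = rgb_image.astype(np.float64)
    h, w, _ = img.shape
    grad = np.zeros((h, w))
    # Differences to the right and down neighbors cover every 4-neighbor pair once.
    for dy, dx in ((0, 1), (1, 0)):
        diff = np.zeros((h, w))
        diff[:h - dy, :w - dx] = np.linalg.norm(
            img[dy:, dx:] - img[:h - dy, :w - dx], axis=2)
        grad = np.maximum(grad, diff)
        # Credit the same difference to the other pixel of the pair.
        shifted = np.zeros((h, w))
        shifted[dy:, dx:] = diff[:h - dy, :w - dx]
        grad = np.maximum(grad, shifted)
    return grad
```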


Edge-based spatial descriptor for content-based Image retrieval (내용 기반 영상 검색을 위한 에지 기반의 공간 기술자)

  • Kim, Nac-Woo;Kim, Tae-Yong;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.1-10
    • /
    • 2005
  • Content-based image retrieval systems are being actively investigated owing to their ability to retrieve images based on actual visual content rather than manually associated textual descriptions. In this paper, we propose a novel approach to image retrieval based on edge structural features, using the edge correlogram and the color coherence vector. After the color vector angle is applied in the pre-processing stage, an image is divided into two parts: a high-frequency image and a low-frequency image. In the low-frequency image, the global color distribution of smooth pixels is extracted by the color coherence vector, thereby incorporating spatial information into the proposed color descriptor. In the high-frequency image, the distribution of gray pairs at edges is extracted by the edge correlogram. Since the proposed algorithm includes spatial and edge information between colors, it robustly reduces the effect of significant changes in appearance and shape in image analysis. The proposed method provides a simple and flexible description of images with complex scenes in terms of the structural features of the image content. Experimental evidence suggests that our algorithm outperforms recent histogram refinement methods for image indexing and retrieval. To index the multidimensional feature vectors, we use an R*-tree structure.
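A simplified color coherence vector, sketched without the color-vector-angle frequency split described in the abstract; the quantization level and coherence threshold are assumed parameters:

```python
import numpy as np
from scipy import ndimage

def color_coherence_vector(rgb_image, levels=4, tau_ratio=0.01):
    """Simplified color coherence vector: coherent/incoherent pixel counts per color.

    Colors are uniformly quantized to levels**3 bins; a pixel is 'coherent' when its
    connected component of same-colored pixels exceeds tau_ratio of the image area.
    """
    q = rgb_image.astype(np.int32) * levels // 256
    color_ids = q[:, :, 0] * levels * levels + q[:, :, 1] * levels + q[:, :, 2]
    tau = tau_ratio * color_ids.size
    ccv = np.zeros((levels ** 3, 2))                 # columns: [coherent, incoherent]
    for color in np.unique(color_ids):
        mask = color_ids == color
        comps, n = ndimage.label(mask)               # 4-connected components
        sizes = ndimage.sum(mask, comps, index=range(1, n + 1))
        coherent = sizes[sizes >= tau].sum()
        ccv[color] = (coherent, mask.sum() - coherent)
    return ccv
```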

Software development for the visualization of brain fiber tract by using 24-bit color coding in diffusion tensor image

  • Oh, Jung-Su;Song, In-Chan;Ik hwan Cho;Kim, Jong-Hyo;Chang, Kee-Hyun;Park, Kwang-Suk
    • Proceedings of the KSMRM Conference
    • /
    • 2002.11a
    • /
    • pp.133-133
    • /
    • 2002
  • Purpose: The purpose of this paper is to implement software to visualize brain fiber tracts using a 24-bit color coding scheme and to test its feasibility. Materials and Methods: MR imaging was performed on a GE 1.5 T Signa scanner. For diffusion tensor imaging, we used a single-shot spin-echo EPI sequence with 7 non-collinear pulsed-field gradient directions, (x, y, z) = (1,1,0), (-1,1,0), (1,0,1), (-1,0,1), (0,1,1), (0,1,-1), plus one acquisition without a diffusion gradient. The b-factor was 500 s/mm². Acquisition parameters were as follows: TR/TE = 10000 ms/99 ms, FOV = 240 mm, matrix = 128×128, slice thickness/gap = 6 mm/0 mm, total number of slices = 30. Subjects were 10 normal young volunteers (age 21-26 yrs, 5 men, 5 women). All DTI images were smoothed with a Gaussian kernel with a FWHM of 2 pixels. The color coding scheme for visualizing directional information was as follows. The HSV (Hue, Saturation, Value) color system is appropriate for assigning RGB (Red, Green, Blue) values to different directions because of its volumetric directional expression. Each of H, S, and V is assigned according to (r, θ, φ) in spherical coordinates, and the resulting HSV values are transformed into the RGB color system by the standard HSV-to-RGB conversion formula. Symmetry schemes: it is natural to code antipodal directions with the same color (antipodal symmetry), so even with no further symmetry scheme, antipodal symmetry must be included. With no symmetry scheme, every different orientation can be assigned a different color (H = φ, S = 2θ/π, V = λw, where λw is the anisotropy), but this may produce very discontinuous colors even between adjacent voxels. On the other hand, the full symmetry or absolute value scheme includes symmetry under 180° rotation about the xy-plane of the color coordinates (rotational symmetry) and between both hemispheres (mirror symmetry). In the absolute value scheme, the RGB values are expressed as R = λw|Vx|, G = λw|Vy|, B = λw|Vz|, where (Vx, Vy, Vz) is the eigenvector corresponding to the largest eigenvalue of the diffusion tensor. Applying the full symmetry or absolute value scheme gives more continuous color coding at the expense of coding symmetric directions with the same color. For better visualization of fiber tract directions, gamma and brightness corrections were applied. All of this was implemented on the IDL 5.4 platform.
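A short sketch of the absolute-value (full symmetry) color coding given in the abstract, R = λw|Vx|, G = λw|Vy|, B = λw|Vz|; gamma and brightness correction are omitted:

```python
import numpy as np

def absolute_value_color_map(principal_eigvecs, anisotropy):
    """Absolute-value (full symmetry) color coding of fiber directions.

    principal_eigvecs : (..., 3) unit eigenvectors of the diffusion tensor
                        (largest eigenvalue), one per voxel.
    anisotropy        : (...) anisotropy weighting (lambda_w) in [0, 1].
    Returns an 8-bit RGB volume; gamma/brightness correction from the paper
    is not applied here.
    """
    rgb = np.abs(principal_eigvecs) * anisotropy[..., np.newaxis]
    return np.clip(rgb * 255.0, 0, 255).astype(np.uint8)
```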
