• Title/Summary/Keyword: Color variance feature


Scene Text Extraction in Natural Images Using Color Variance Feature (색 변화 특징을 이용한 자연이미지에서의 장면 텍스트 추출)

  • 송영자;최영우
    • Proceedings of the IEEK Conference / 2003.07e / pp.1835-1838 / 2003
  • Texts in natural images contain significant and detailed information about the images. To extract those texts correctly, we suggest a text extraction method using a color variance feature. Generally, texts in images show color variations against their backgrounds. If we express those variations in the three-dimensional RGB color space, we can emphasize text regions that are hard to capture with methods based on intensity variations in gray-level images, and we can obtain robust extraction results even for images contaminated by lighting variations. In this paper the color variations are measured by color variance. First, horizontal and vertical variance images are obtained independently, and we find that text regions have high variance values in both directions. The two images are then logically ANDed to remove non-text components that have high variance in only one direction. We have applied the proposed method to many kinds of natural images and confirmed that the proposed feature helps find text regions that can be missed by the following features: intensity variations in gray-level images and/or color continuity in color images.

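The variance-and-AND scheme described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the window size and threshold are assumed values.

```python
import numpy as np

def directional_color_variance(img, win=5):
    """Per-pixel variance of RGB colors inside a 1-D window.

    Horizontal variance uses a 1 x win window, vertical a win x 1 window;
    the variance is summed over the three color channels.
    """
    img = img.astype(np.float64)
    h, w, _ = img.shape
    pad = win // 2
    hv = np.zeros((h, w))
    vv = np.zeros((h, w))
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    for y in range(h):
        for x in range(w):
            row = padded[y + pad, x:x + win, :]   # horizontal window
            col = padded[y:y + win, x + pad, :]   # vertical window
            hv[y, x] = row.var(axis=0).sum()
            vv[y, x] = col.var(axis=0).sum()
    return hv, vv

def text_candidate_mask(img, win=5, thresh=100.0):
    """Logically AND the thresholded horizontal and vertical variance
    images, keeping only pixels with high variance in both directions."""
    hv, vv = directional_color_variance(img, win)
    return (hv > thresh) & (vv > thresh)
```

The AND step is what suppresses edges that respond in only one direction, such as long horizontal background structures.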

Text Detection and Binarization using Color Variance and an Improved K-means Color Clustering in Camera-captured Images (카메라 획득 영상에서의 색 분산 및 개선된 K-means 색 병합을 이용한 텍스트 영역 추출 및 이진화)

  • Song Young-Ja;Choi Yeong-Woo
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.205-214 / 2006
  • Texts in images carry significant and detailed information about the scenes, and if we can automatically detect and recognize those texts in real time, this capability can be used in various applications. In this paper, we propose a new text detection method that finds texts in various camera-captured images, together with a text segmentation method for the detected text regions. The detection method uses color variance in RGB color space as its detection feature, and the segmentation method uses an improved K-means color clustering in RGB color space. We have tested the proposed methods using various kinds of document-style and natural scene images captured by digital cameras and a mobile-phone camera, and we also tested the method on a portion of the ICDAR[1] contest images.
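A plain K-means color clustering for binarizing a detected text region might look like the sketch below. The "darkest cluster is text" rule is a common heuristic assumed here for illustration; the paper's improved merging strategy is not reproduced.

```python
import numpy as np

def kmeans_rgb(pixels, k=3, iters=20, seed=0):
    """Plain K-means (Lloyd's algorithm) on an (N, 3) array of RGB pixels."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every pixel to every center, then nearest-center labels
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # leave empty clusters in place
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

def binarize_text_region(img, k=2):
    """Cluster a text region's pixels in RGB and keep the darkest cluster
    as foreground (an assumed heuristic, not the paper's merging rule)."""
    h, w, _ = img.shape
    labels, centers = kmeans_rgb(img.reshape(-1, 3).astype(float), k)
    fg = centers.sum(axis=1).argmin()         # darkest mean color = text
    return (labels == fg).reshape(h, w)
```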

Variance Recovery in Text Detection using Color Variance Feature (색 분산 특징을 이용한 텍스트 추출에서의 손실된 분산 복원)

  • Choi, Yeong-Woo;Cho, Eun-Sook
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.73-82 / 2009
  • This paper proposes a variance recovery method for character strokes that can be missed by the previously proposed color variance approach to text detection in natural scene images. The previous method can miss the color variance when character strokes are thick or long, because the horizontal and vertical variance-detection windows have a fixed length. This paper therefore proposes a variance recovery method that uses geometric information from the bounding boxes of connected components together with heuristic knowledge. We have tested the proposed method using various kinds of document-style and natural scene images, such as billboards and signboards, captured by digital cameras and mobile-phone cameras, and we show improved text detection accuracy even in images containing large characters.

The Walkers Tracking Algorithm using Color Informations on Multi-Video Camera (다중 비디오카메라에서 색 정보를 이용한 보행자 추적)

  • 신창훈;이주신
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.5 / pp.1080-1088 / 2004
  • In this paper, we propose an algorithm for tracking moving objects of interest across multiple video cameras that is robust to variations in intensity, shape, and background. After converting the RGB color coordinates of the input images into HSI color coordinates, moving objects are detected in the hue channel by applying the difference-image method and the integral projection method to the background image and the object image. The hue values of the detected moving area are quantized into 24 levels spanning 0° to 360°. The feature parameters of a moving object are the three quantized hue levels with the highest distribution and the differences among those three levels. To examine the validity of the proposed method, we used human images with variations in intensity and shape, and human images with variations in intensity, shape, and background, as the moving objects. In the surveillance results, the variation of the detected person's hue distribution level at each camera stays within 2 levels, and we confirmed that the person of interest is tracked and surveilled automatically across the cameras using these feature parameters.
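The 24-level hue quantization and top-three-level feature described above can be sketched as follows. HSV hue is used here as a stand-in for HSI hue (the hue component is defined the same way in both models), and the rounding-based binning is an assumption for illustration.

```python
import colorsys

import numpy as np

def hue_levels(rgb_pixels, levels=24):
    """Quantize each pixel's hue into `levels` bins of 360/levels degrees."""
    out = []
    for r, g, b in np.asarray(rgb_pixels, dtype=float) / 255.0:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)   # h in [0, 1)
        out.append(int(round(h * levels)) % levels)
    return np.asarray(out)

def hue_feature(rgb_pixels, levels=24, top=3):
    """The three most populated hue levels plus their mutual differences,
    mirroring the feature parameters described in the abstract."""
    counts = np.bincount(hue_levels(rgb_pixels, levels), minlength=levels)
    top_lv = counts.argsort()[::-1][:top]        # most populated levels
    diffs = np.diff(np.sort(top_lv))             # differences between them
    return top_lv, diffs
```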

Smoke Detection Method Using Local Binary Pattern Variance in RGB Contrast Image (RGB Contrast 영상에서의 Local Binary Pattern Variance를 이용한 연기검출 방법)

  • Kim, Jung Han;Bae, Sung-Ho
    • Journal of Korea Multimedia Society / v.18 no.10 / pp.1197-1204 / 2015
  • Smoke detection plays an important role in the early detection of fire. In this paper, we propose a method that generates LBPV (Local Binary Pattern Variance) feature vectors from RGB contrast images and applies an SVM (Support Vector Machine) to detect smoke. The proposed method rearranges the mean value of each block from the R, G, and B channels together with the intensity of that mean, and generates an RGB contrast image that indicates each channel's contrast, exploiting the achromatic color of smoke. Uniform LBPV, rotation-invariant LBPV, and rotation-invariant uniform LBPV are applied to the RGB contrast images to generate LBP-form feature vectors, which the SVM uses to distinguish smoke from non-smoke areas. Experimental results show that the true positive detection rate is similar to that of the existing LBP/LBPV method while the false positive detection rate is improved, even though the proposed method halves the number of feature vectors.
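The LBPV idea, an LBP histogram in which each pixel votes with its local variance instead of 1, can be sketched as below. Only the basic 256-code LBP is computed here; the uniform and rotation-invariant variants named in the abstract are mappings of these codes.

```python
import numpy as np

def lbpv_histogram(gray):
    """256-bin LBP histogram where each pixel contributes its local
    8-neighbour variance rather than a unit count (LBPV)."""
    g = gray.astype(float)
    # 8 neighbours, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = g.shape
    hist = np.zeros(256)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = np.array([g[y + dy, x + dx] for dy, dx in offs])
            # LBP code: one bit per neighbour >= centre
            code = sum(1 << i for i, v in enumerate(nb) if v >= g[y, x])
            hist[code] += nb.var()            # variance-weighted vote
    return hist / (hist.sum() + 1e-12)        # normalized histogram
```

Because flat regions have near-zero variance, they contribute almost nothing to the histogram, which is what makes LBPV more discriminative than a plain LBP count for textures like smoke.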

Image Retrieval Using Spacial Color Correlation and Local Texture Characteristics (칼라의 공간적 상관관계 및 국부 질감 특성을 이용한 영상검색)

  • Sung, Joong-Ki;Chun, Young-Deok;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.103-114 / 2005
  • This paper presents a content-based image retrieval (CBIR) method using a combination of color and texture features. As the color feature, a color autocorrelogram extracted from the hue and saturation components of a color image is chosen. As texture features, BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients) extracted from the value component are chosen. During feature extraction, the color autocorrelogram and the BVLC are simplified in consideration of their computational complexity, and after extraction, the vector components of these features are efficiently quantized in consideration of their storage space. Experiments on the Corel and VisTex DBs show that the proposed retrieval method yields a 9.5% maximum precision gain over the method using only the color autocorrelogram and 4.0% over BDIP-BVLC. The proposed method also yields 12.6%, 14.6%, and 27.9% maximum precision gains over methods using wavelet moments, CSD, and the color histogram, respectively.
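For readers unfamiliar with the autocorrelogram, the following is a simplified sketch: for each quantized color c and distance d, it estimates the probability that a pixel at distance d from a pixel of color c also has color c. Only the four axial offsets at each distance are sampled here, a simplification of the usual distance shell, and the input is assumed to be an already color-quantized label image.

```python
import numpy as np

def color_autocorrelogram(quant_img, dists=(1, 3, 5)):
    """Simplified color autocorrelogram over a quantized label image."""
    h, w = quant_img.shape
    ncol = int(quant_img.max()) + 1
    ac = np.zeros((ncol, len(dists)))
    for k, d in enumerate(dists):
        hits = np.zeros(ncol)
        total = np.zeros(ncol)
        for y in range(h):
            for x in range(w):
                c = quant_img[y, x]
                # axial offsets at distance d (subset of the full shell)
                for dy, dx in [(-d, 0), (d, 0), (0, -d), (0, d)]:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total[c] += 1
                        hits[c] += quant_img[ny, nx] == c
        ac[:, k] = hits / np.maximum(total, 1)
    return ac
```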

Scene Text Extraction in Natural Images using Hierarchical Feature Combination and Verification (계층적 특징 결합 및 검증을 이용한 자연이미지에서의 장면 텍스트 추출)

  • 최영우;김길천;송영자;배경숙;조연희;노명철;이성환;변혜란
    • Journal of KIISE:Software and Applications / v.31 no.4 / pp.420-438 / 2004
  • Texts artificially or naturally contained in natural images carry significant and detailed information about the scenes. If we develop a method that can extract and recognize those texts in real time, it can be applied to many important applications. In this paper, we suggest a new method that extracts text areas in natural images using the low-level image features of color continuity, gray-level variation, and color variance, and that verifies the extracted candidate regions using a high-level text feature, the stroke; the two levels of features are combined hierarchically. Color continuity is used since most characters in the same text region have the same color, and gray-level variation is used since text strokes are distinctive in their gray values against the background. Color variance is used since text strokes are also distinctive in their color values against the background, and this feature is more sensitive than the gray-level variation. The stroke-level text features are extracted using a multi-resolution wavelet transform on local image areas, and the feature vectors are input to an SVM (Support Vector Machine) classifier for verification. We have tested the proposed method using various kinds of natural images and confirmed that the extraction rates are very high even in images with complex backgrounds.

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour as follows. The first step is to obtain the Q image from the transform of RGB into YIQ. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying a threshold to the Q image; from the candidates, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, which decreases accuracy and affects the split of the Q image. Second, no color systems other than YIQ were analyzed; YIQ is a good choice, but other color systems such as HSV, CIELUV, and YCrCb should also be considered. The final problem concerns the selection of the optimal contour: the conventional method selects the maximum of the average feature variance over the pixels near the contour points, and this maximum-of-average criterion shrinks the extracted contour compared with the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, giving a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average, giving a 46% performance increase. Combining all the solutions, the proposed method achieves twice the accuracy and stability of the conventional method.
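The change from a maximum-of-average to a maximum-of-total selection rule can be illustrated with the hypothetical sketch below. The contour extraction here (first row per column exceeding the threshold on a Q-like image) is a deliberate toy stand-in for the paper's procedure; only the total-score selection is the point.

```python
import numpy as np

def contour_score(q, contour_pts):
    """Total (not averaged) feature 'D': sum of absolute Q differences
    across each contour point, taken between its vertical neighbours."""
    h = q.shape[0]
    return sum(abs(float(q[min(y + 1, h - 1), x]) - float(q[max(y - 1, 0), x]))
               for y, x in contour_pts)

def best_contour(q, thresholds):
    """For each threshold, take the first row per column exceeding it as
    a crude contour, then keep the threshold whose contour maximizes the
    total score, as the modified selection rule prefers."""
    best, best_s = None, -1.0
    for t in thresholds:
        mask = q > t
        pts = [(int(np.argmax(mask[:, x])), x)
               for x in range(q.shape[1]) if mask[:, x].any()]
        s = contour_score(q, pts)
        if s > best_s:
            best_s, best = s, pts
    return best
```

Summing rather than averaging rewards longer contours, which is why the total-score rule avoids the shrinkage the paper attributes to the maximum-of-average criterion.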

Night Time Leading Vehicle Detection Using Statistical Feature Based SVM (통계적 특징 기반 SVM을 이용한 야간 전방 차량 검출 기법)

  • Joung, Jung-Eun;Kim, Hyun-Koo;Park, Ju-Hyun;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.7 no.4 / pp.163-172 / 2012
  • A driver assistance system is critical for improving the convenience and safety of vehicle driving, and several systems, such as adaptive cruise control and forward collision warning, have already been commercialized. Efficient vehicle detection is very important for improving such driver assistance systems. Most existing vehicle detection systems are based on radar, which measures the distance between the host and leading (or oncoming) vehicles under various weather conditions, but radar requires a high deployment cost and suffers complexity overload when there are many vehicles. Camera-based vehicle detection is a good alternative because of its low cost and simple implementation. In general, night-time vehicle detection is more complicated than daytime detection, because features such as a vehicle's outline and color are much harder to distinguish in a dim environment. This paper proposes a method to detect vehicles at night by analyzing the captured color space while reducing reflections and other light sources in the images. Four color spaces, namely RGB, YCbCr, normalized RGB, and Ruta-RGB, are compared and evaluated. A suboptimal threshold value is determined by the Otsu algorithm and applied to extract candidate taillights of leading vehicles. Statistical features, namely the mean, variance, skewness, kurtosis, and entropy, are extracted from the candidate regions and used as the feature vector for an SVM (Support Vector Machine) classifier. According to our simulation results, the proposed statistical-feature-based SVM provides relatively high leading-vehicle detection performance at various distances in variable night-time environments.
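The two building blocks named above, Otsu thresholding and the five statistical features, can be sketched as follows for an 8-bit grayscale channel. This is a generic illustration, not the paper's pipeline.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance
    of an 8-bit image's histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def statistical_features(region):
    """Mean, variance, skewness, kurtosis, and entropy of an 8-bit
    candidate region: the five-dimensional SVM feature vector."""
    x = region.ravel().astype(float)
    m, v = x.mean(), x.var()
    s = v ** 0.5 if v > 0 else 1.0
    skew = ((x - m) ** 3).mean() / s ** 3
    kurt = ((x - m) ** 4).mean() / s ** 4
    p = np.bincount(region.ravel(), minlength=256) / x.size
    ent = -(p[p > 0] * np.log2(p[p > 0])).sum()         # Shannon entropy
    return np.array([m, v, skew, kurt, ent])
```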

Image Retrieval Using Spatial Color Correlation and Texture Characteristics Based on Local Fourier Transform (색상의 공간적인 상관관계와 국부적인 푸리에 변환에 기반한 질감 특성을 이용한 영상 검색)

  • Park, Ki-Tae;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.10-16 / 2007
  • In this paper, we propose a technique for retrieving images using spatial color correlation and texture characteristics based on the local Fourier transform. Two new descriptors are proposed: a color descriptor representing spatial color correlation, and a descriptor combining the proposed color descriptor with a texture descriptor. Most existing color descriptors that represent spatial color correlation, including the color correlogram, consider only the color distribution between neighborhood pixels, not their structural information. We therefore propose a novel color descriptor that represents spatial color distribution and structural information simultaneously. The proposed descriptor represents the color distribution of min-max color pairs, calculated from the color distances between the center pixel and its neighborhood pixels in a 3x3 block, and it simultaneously considers structural information, namely the directional difference between the minimum and maximum colors. The resulting min-max color correlation descriptor (MMCCD) contains the mean and variance values of each directional difference. While the proposed color descriptor uses a far smaller feature vector than the color correlogram, it improves the precision rate by 2.5%~13.21% compared with the color correlogram. In addition, we propose another descriptor that combines the proposed color descriptor with texture characteristics based on the local Fourier transform; the combined method reduces the feature vector size and also shows improved results over existing methods.
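A heavily simplified sketch of the min-max color pair idea follows: in every 3x3 window, find the neighbors nearest to and farthest from the center color (Euclidean RGB distance) and record the difference between their angular indices. Collapsing the statistics to a single mean and variance is an assumption of this sketch; the full MMCCD keeps per-pair statistics.

```python
import numpy as np

def min_max_direction_stats(img):
    """Mean and variance of the angular-index difference between the
    min-distance and max-distance neighbours in each 3x3 window."""
    g = img.astype(float)
    h, w, _ = g.shape
    # 8 neighbours, clockwise from top-left; list index = angular index
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            d = [np.linalg.norm(g[y + dy, x + dx] - g[y, x])
                 for dy, dx in offs]
            diffs.append(abs(int(np.argmax(d)) - int(np.argmin(d))))
    diffs = np.array(diffs, dtype=float)
    return diffs.mean(), diffs.var()
```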