• Title/Summary/Keyword: Text color

The Binarization of Text Regions in Natural Scene Images, based on Stroke Width Estimation (자연 영상에서 획 너비 추정 기반 텍스트 영역 이진화)

  • Zhang, Chengdong; Kim, Jung Hwan; Lee, Guee Sang
    • Smart Media Journal / v.1 no.4 / pp.27-34 / 2012
  • In this paper, a novel text binarization method is presented that can deal with complex conditions such as shadows, non-uniform illumination due to highlights or object projection, and messy backgrounds. To locate the target text region, a focus line is assumed to pass through the text region. Next, connected component analysis and stroke width estimation based on the location of the focus line are used to locate the bounding box of the text region and the box of each connected component. A series of classifications is applied to identify whether each CC (connected component) is text or non-text. A modified K-means clustering method based on the HCL color space is then applied to reduce the color dimension, and a text binarization procedure based on the locations of the text components and seed color pixels generates the final result.

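A minimal sketch of one ingredient mentioned in the abstract above: estimating the stroke width of a binary connected component with a distance transform. This is a common stand-in for a stroke-width-estimation step, not the authors' exact pipeline.

```python
# Stroke width estimate for a binary connected component (illustrative only).
import cv2
import numpy as np

def estimate_stroke_width(component_mask: np.ndarray) -> float:
    """component_mask: uint8 binary image, 255 where the CC is foreground."""
    # Distance from every foreground pixel to the nearest background pixel.
    dist = cv2.distanceTransform(component_mask, cv2.DIST_L2, 3)
    fg = dist[component_mask > 0]
    if fg.size == 0:
        return 0.0
    # Stroke width is roughly twice the typical distance to the boundary.
    return 2.0 * float(np.median(fg))

if __name__ == "__main__":
    # Synthetic example: a 5-pixel-thick horizontal bar.
    mask = np.zeros((40, 100), np.uint8)
    mask[18:23, 10:90] = 255
    print(estimate_stroke_width(mask))  # prints a value near the bar thickness
```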

The Color Polarity Method for Binarization of Text Region in Digital Video (디지털 비디오에서 문자 영역 이진화를 위한 색상 극화 기법)

  • Jeong, Jong-Myeon
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.21-28 / 2009
  • Color polarity classification determines whether the text color is bright or dark and is a prerequisite for text extraction. In this paper we propose a color polarity classification method for text region extraction. Based on observations of text and background regions, the proposed method uses the ratios of the sizes and standard deviations of bright and dark regions. First, the gray-scale input region is binarized with Otsu's method. The largest segment among the bright regions and the largest among the dark regions are selected, and the ratio of their sizes is defined as the first measure for color polarity classification. Next, from each of the two groups of regions we select the segment with the smallest standard deviation of distance from its center and take the ratio of these standard deviations as the second measure. These two ratio features are used to determine the text color polarity. Experimental results for various fonts and sizes show that the proposed method classifies the color polarity of text robustly.
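
A minimal sketch of the first measure described above: Otsu binarization of a gray-scale text region followed by the size ratio between the largest bright and largest dark segments. The second (standard-deviation) measure and the decision threshold are omitted; they are not reproduced here.

```python
# Otsu binarization + size-ratio cue for color polarity (illustrative sketch).
import cv2
import numpy as np

def largest_segment_area(mask: np.ndarray) -> int:
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:                      # only the background label exists
        return 0
    return int(stats[1:, cv2.CC_STAT_AREA].max())

def polarity_size_ratio(gray_region: np.ndarray) -> float:
    """Ratio of the largest bright segment to the largest dark segment.
    The paper thresholds ratios like this to decide whether text is bright or dark."""
    _, bright = cv2.threshold(gray_region, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dark = cv2.bitwise_not(bright)
    return largest_segment_area(bright) / max(largest_segment_area(dark), 1)
```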

Text Extraction in HSI Color Space by Weighting Scheme

  • Le, Thi Khue Van; Lee, Gueesang
    • Smart Media Journal / v.2 no.1 / pp.31-36 / 2013
  • Robust and efficient text extraction is very important for the accuracy of Optical Character Recognition (OCR) systems. Natural scene images with degradations such as uneven illumination, perspective distortion, complex backgrounds and multi-colored text pose many challenges to computer vision tasks, especially text extraction. In this paper, we propose a method for extracting text in signboard images that combines the mean shift algorithm with a weighting scheme for hue and saturation in the HSI color space for clustering. The number of clusters is determined automatically by mean shift-based density estimation, in which local clusters are found by repeatedly searching for higher-density points in the feature vector space. The weighting scheme for hue and saturation is used to formulate a new distance measure in cylindrical coordinates for text extraction. Experimental results on various natural scene images are presented to demonstrate the effectiveness of our approach.

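A minimal, illustrative sketch of a hue/saturation-weighted distance in cylindrical HSI coordinates, the kind of measure that could drive the clustering step described above. The weights and the exact form are assumptions, not the paper's formula.

```python
# Weighted cylindrical color distance (illustrative assumptions throughout).
import numpy as np

def weighted_cylindrical_distance(c1, c2, w_hue=2.0, w_sat=1.0, w_int=0.5):
    """c1, c2: (hue in radians, saturation in [0, 1], intensity in [0, 1])."""
    h1, s1, i1 = c1
    h2, s2, i2 = c2
    dh = abs(h1 - h2)
    dh = min(dh, 2 * np.pi - dh)                 # circular hue difference
    # Hue differences matter more when both colors are saturated.
    hue_term = w_hue * (dh / np.pi) * min(s1, s2)
    sat_term = w_sat * abs(s1 - s2)
    int_term = w_int * abs(i1 - i2)
    return np.sqrt(hue_term ** 2 + sat_term ** 2 + int_term ** 2)

# Example: a saturated red versus a saturated blue at similar intensity.
print(weighted_cylindrical_distance((0.0, 0.9, 0.5), (4.0, 0.8, 0.5)))
```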

Color Recommendation for Text Based on Colors Associated with Words

  • Liba, Saki; Nakamura, Tetsuaki; Sakamoto, Maki
    • Journal of Korea Society of Industrial Information Systems / v.17 no.1 / pp.21-29 / 2012
  • In this paper, we propose a new method to select colors representing the meaning of text content based on the cognitive relation between words and colors. Our method builds on a previous study revealing the existence of crucial words for estimating the colors associated with the meaning of text content. Using the associative probability of each color with a given word and the strength of the word's color association, we estimate the probability of colors associated with a given text. The goal of this study is to propose a system that recommends cognitively plausible colors for the meaning of the input text. To build a versatile and efficient database for our system, two psychological experiments were conducted using news site articles. In experiment 1, we collected 498 words that the participants chose as having a strong association with color. In experiment 2, we then investigated which color was associated with each word. In addition to those data, we employed estimated values of the strength of color association and of the colors associated with the words in a very large newspaper corpus (approximately 130,000 words), based on word similarities obtained by Latent Semantic Analysis (LSA). Our method therefore allows us to select colors for a large variety of words or sentences. Finally, we verified that our system succeeds in proposing colors associated with the meaning of the input text by comparing the correct colors answered by participants with the colors estimated by our method. Our system is expected to be useful in various situations such as data visualization, information retrieval, and art or web page design.
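
A minimal sketch of one plausible reading of the estimator described above: a strength-weighted mixture of per-word color probabilities. The association table, the strengths, and the mixture itself are made-up illustrations, not the authors' database or exact formula.

```python
# Strength-weighted mixture of word-color association probabilities (illustrative).
from collections import defaultdict

# Hypothetical P(color | word) for a few color-evocative words.
color_assoc = {
    "ocean":  {"blue": 0.7, "green": 0.2, "white": 0.1},
    "sunset": {"red": 0.5, "orange": 0.4, "purple": 0.1},
}
# Hypothetical strength of color association per word.
strength = {"ocean": 0.9, "sunset": 0.8}

def colors_for_text(words):
    scores, total = defaultdict(float), 0.0
    for w in words:
        if w in color_assoc:
            for color, p in color_assoc[w].items():
                scores[color] += strength[w] * p
            total += strength[w]
    if total == 0.0:
        return {}
    # Normalize so the recommended colors form a probability distribution.
    return {c: s / total for c, s in sorted(scores.items(), key=lambda kv: -kv[1])}

print(colors_for_text("the ocean at sunset".split()))
```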

Text Detection and Binarization using Color Variance and an Improved K-means Color Clustering in Camera-captured Images (카메라 획득 영상에서의 색 분산 및 개선된 K-means 색 병합을 이용한 텍스트 영역 추출 및 이진화)

  • Song Young-Ja; Choi Yeong-Woo
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.205-214 / 2006
  • Text in images carries significant and detailed information about the scene, and if that text can be automatically detected and recognized in real time, it can be used in various applications. In this paper, we propose a new text detection method that finds text in various camera-captured images, together with a text segmentation method for the detected text regions. The detection method uses color variance in RGB color space as a detection feature, and the segmentation method uses an improved K-means color clustering in RGB color space. We have tested the proposed methods on various kinds of document-style and natural scene images captured by digital cameras and mobile phone cameras, as well as on a portion of the ICDAR [1] contest images.
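
A minimal sketch of the segmentation side of the abstract above: ordinary K-means color clustering of a detected text region in RGB, standing in for the "improved K-means" step (the paper's specific improvements are not reproduced here).

```python
# Plain K-means color clustering of a text-region crop (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def cluster_region_colors(region_bgr: np.ndarray, k: int = 3) -> np.ndarray:
    """region_bgr: HxWx3 uint8 crop of a detected text region.
    Returns an HxW label map; a later step would pick the text cluster,
    e.g. the one whose mean color differs most from the border color."""
    pixels = region_bgr.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(region_bgr.shape[:2])

if __name__ == "__main__":
    demo = np.random.randint(0, 255, (32, 64, 3), np.uint8)
    print(np.unique(cluster_region_colors(demo)))
```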

Extraction of Text Regions from Spam-Mail Images Using Color Layers (색상레이어를 이용한 스팸메일 영상에서의 텍스트 영역 추출)

  • Kim Ji-Soo; Kim Soo-Hyung; Han Seung-Wan; Nam Taek-Yong; Son Hwa-Jeong; Oh Sung-Ryul
    • The KIPS Transactions: Part B / v.13B no.4 s.107 / pp.409-416 / 2006
  • In this paper, we propose an algorithm for extracting text regions from spam-mail images using color layers. The CLTE (color layer-based text extraction) algorithm divides the input image into eight planes as color layers, extracts connected components on the eight images, and then classifies them into text and non-text regions based on component size. We also propose an algorithm for recovering damaged text strokes from the extracted text image. In the binary image there are two types of damaged strokes: (1) middle strokes such as 'ㅣ' or 'ㅡ' are deleted, and (2) first and/or last strokes such as 'ㅇ' or 'ㅁ' are filled with black pixels. An experiment with 200 spam-mail images shows that the proposed approach is more accurate than conventional methods by over 10%.
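
A minimal sketch of the layer-and-filter idea above, assuming the eight color layers come from a one-bit quantization of each RGB channel (2^3 = 8 combinations); the paper's exact layer construction and size thresholds may differ.

```python
# Eight coarse color layers + connected-component size filtering (illustrative).
import cv2
import numpy as np

def color_layers(img_bgr: np.ndarray):
    """Yield eight binary layers, one per coarse RGB combination."""
    bits = img_bgr >= 128                         # HxWx3 boolean
    codes = bits[..., 0] * 4 + bits[..., 1] * 2 + bits[..., 2] * 1
    for code in range(8):
        yield (codes == code).astype(np.uint8) * 255

def text_candidates(img_bgr, min_area=20, max_area=5000):
    boxes = []
    for layer in color_layers(img_bgr):
        n, _, stats, _ = cv2.connectedComponentsWithStats(layer, connectivity=8)
        for i in range(1, n):                     # skip background label 0
            x, y, w, h, area = stats[i]
            if min_area <= area <= max_area:      # crude size-based classification
                boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```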

Scene Text Extraction in Natural Images Using Color Variance Feature (색 변화 특징을 이용한 자연이미지에서의 장면 텍스트 추출)

  • Song Young-Ja; Choi Yeong-Woo
    • Proceedings of the IEEK Conference / 2003.07e / pp.1835-1838 / 2003
  • Text in natural images contains significant and detailed information about the images. To extract such text correctly, we suggest a text extraction method using a color variance feature. In general, text in images differs in color from the background. By expressing these variations in the three-dimensional RGB color space, we can emphasize text regions that are hard to capture with methods based on intensity variations in gray-level images, and we can obtain robust extraction results even for images affected by lighting variations. In this paper the color variations are measured by color variance. First, horizontal and vertical variance images are obtained independently; text regions show high variance in both directions. The two images are then logically ANDed to remove non-text components that have high variance in only one direction. We have applied the proposed method to multiple kinds of natural images and confirmed that the proposed feature helps to find text regions that can be missed by features such as intensity variations in gray-level images or color continuity in color images.

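A minimal sketch of the color-variance idea described above: local color variance along horizontal and vertical windows, each thresholded and then logically ANDed. The window size and threshold are illustrative assumptions.

```python
# Horizontal/vertical color-variance maps and their logical AND (illustrative).
import numpy as np

def directional_color_variance(img_rgb: np.ndarray, axis: int, win: int = 7):
    """Variance of RGB values in a 1-D sliding window along `axis`, summed over channels."""
    x = img_rgb.astype(np.float32)
    kernel = np.ones(win, np.float32) / win

    def smooth(a):
        return np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, a)

    mean = smooth(x)
    mean_sq = smooth(x * x)
    return (mean_sq - mean * mean).sum(axis=2)    # per-pixel variance, summed over R, G, B

def text_mask(img_rgb, thresh=200.0):
    var_h = directional_color_variance(img_rgb, axis=1)   # along rows (horizontal)
    var_v = directional_color_variance(img_rgb, axis=0)   # along columns (vertical)
    return (var_h > thresh) & (var_v > thresh)            # keep pixels high in both
```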

Touch TT: Scene Text Extractor Using Touchscreen Interface

  • Jung, Je-Hyun; Lee, Seong-Hun; Cho, Min-Su; Kim, Jin-Hyung
    • ETRI Journal / v.33 no.1 / pp.78-88 / 2011
  • In this paper, we present the Touch Text exTractor (Touch TT), an interactive text segmentation tool for extracting scene text from camera-based images. Touch TT provides a natural interface in which the user simply indicates the location of text regions with a touchline. Touch TT then automatically estimates the text color and roughly locates the text regions. By inferring text characteristics from the estimated text color and text region, Touch TT can extract text components. It can also handle partially drawn lines that cover only a small section of the text area. The proposed system achieves reasonable accuracy for text extraction on moderately difficult examples from the ICDAR 2003 database and our own database.
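
A minimal sketch of one interpretation of the idea above, not Touch TT itself: estimate a seed text color from the pixels under a user-drawn touchline, then keep pixels whose color is close to that seed. The distance threshold is an illustrative assumption.

```python
# Touchline-seeded color segmentation (illustrative interpretation).
import numpy as np

def extract_by_touchline(img_rgb: np.ndarray, line_points, thresh=40.0):
    """line_points: iterable of (row, col) pixels under the user's touchline."""
    rows, cols = zip(*line_points)
    # Robust estimate of the text color from the touched pixels.
    seed = np.median(img_rgb[rows, cols].astype(np.float32), axis=0)
    # Per-pixel RGB distance to the seed color.
    dist = np.linalg.norm(img_rgb.astype(np.float32) - seed, axis=2)
    return dist < thresh                          # boolean text mask
```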

Text Area Extraction Method for Color Images Based on Labeling and Gradient Difference Method (레이블링 기법과 밝기값 변화에 기반한 컬러영상의 문자영역 추출 방법)

  • Won, Jong-Kil; Kim, Hye-Young; Cho, Jin-Soo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.511-521 / 2011
  • As the use of image input and output devices increases, extracting text areas from color images is becoming more important. In this paper, to extract text areas efficiently, we present a text area extraction method for color images based on labeling and a gradient difference method. The proposed method first eliminates non-text areas through labeling and filtering. After generating text area candidates using the property that text areas exhibit high gradient differences, the text areas are extracted through post-processing consisting of noise removal and text area merging. The benefits of the proposed method are its simplicity and its accuracy, which is higher than that of conventional methods. Experimental results show that the precision, recall, and inverse ratio of non-text extraction (IRNTE) of the proposed method are 99.59%, 98.65%, and 82.30%, respectively.
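
A minimal sketch of a gradient-difference map (maximum minus minimum gray-level gradient in a local window), the kind of cue a method like the one above could threshold to generate text-area candidates; the window size is an assumption.

```python
# Local gradient-difference map for text candidate generation (illustrative).
import cv2
import numpy as np

def gradient_difference(gray: np.ndarray, win: int = 7) -> np.ndarray:
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)                   # gradient magnitude per pixel
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (win, win))
    local_max = cv2.dilate(mag, kernel)           # maximum gradient in the window
    local_min = cv2.erode(mag, kernel)            # minimum gradient in the window
    return local_max - local_min                  # high values suggest text areas
```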

Title Extraction from Book Cover Images Using Histogram of Oriented Gradients and Color Information

  • Do, Yen; Kim, Soo Hyung; Na, In Seop
    • International Journal of Contents / v.8 no.4 / pp.95-102 / 2012
  • In this paper, we present a technique to extract title areas from book cover images. A typical book cover image may contain text, pictures, and diagrams as well as a complex and irregular background. In addition, the high variability of character features such as thickness, font, position, background and tilt makes the text extraction task more complicated. We therefore propose an efficient two-step method that uses Histogram of Oriented Gradients (HOG) and color information to find the title areas. First, text localization is carried out to find title candidates. Then, a refinement process is performed to retain sufficient components of the title areas. To obtain the best result, we also apply constraints on the size of the title and on the ratio between its length and width. We achieve encouraging results in extracting title regions from book cover images, which demonstrates the advantages and efficiency of the proposed method.
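
A minimal sketch loosely following the two-step idea above: score candidate boxes by the energy of their HOG descriptor and filter by simple size/aspect-ratio constraints. The thresholds, patch size, and HOG parameters are illustrative assumptions, and the paper's color-information step is not reproduced here.

```python
# HOG-scored title candidates with an aspect-ratio constraint (illustrative).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def title_candidates(gray: np.ndarray, boxes, hog_thresh=0.1,
                     min_aspect=2.0, max_aspect=20.0):
    """gray: float image in [0, 1]; boxes: list of (x, y, w, h) candidates."""
    keep = []
    for x, y, w, h in boxes:
        if h == 0 or not (min_aspect <= w / h <= max_aspect):
            continue                              # titles tend to be wide and short
        patch = resize(gray[y:y + h, x:x + w], (32, 128))
        desc = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
        if desc.mean() > hog_thresh:              # strong oriented gradients => text-like
            keep.append((x, y, w, h))
    return keep
```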